[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

Iterative autoassociative memory models for image recalls and pattern classifications
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170377
S. Chien, In-Cheol Kim, Dae-Young Kim
Abstract: Autoassociative single-layer neural networks (SLNNs) and multilayer perceptron (MLP) models have been designed to achieve English-character image recall and classification. The two models are trained with the pseudoinverse algorithm and the backpropagation learning algorithm, respectively. The error-correcting effect of both models can be improved by introducing a feedback structure that returns the autoassociative image outputs and classification tag fields to the network's inputs. The two models are compared in terms of character image recall and classification capabilities. Experimental results indicate that, compared with the SLNN, the MLP network required a longer learning time and a smaller number of weights, and showed more stable noise-correcting capability and classification rate as the number of stored patterns changed.
Citations: 3
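The pseudoinverse (projection) rule mentioned in the abstract can be sketched in a few lines. This is a minimal illustration of that storage rule and the iterative recall loop, not the paper's full system: the pattern sizes are arbitrary, and the feedback of classification tag fields described above is omitted.

```python
import numpy as np

# Store bipolar patterns as columns of X; the pseudoinverse rule
# W = X X^+ projects any input onto the span of the stored patterns.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(64, 5))   # 5 bipolar patterns of dimension 64
W = X @ np.linalg.pinv(X)

def recall(x, steps=10):
    """Iteratively apply the autoassociative memory until the state stabilizes."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# A stored pattern is a fixed point of the recall dynamics,
# since W X = X when the stored columns are linearly independent.
assert np.array_equal(recall(X[:, 0].copy()), X[:, 0])
```

Iterating the recall map is what gives the error-correcting effect: a noisy input is repeatedly pulled back toward the projection subspace and thresholded.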
Hopfield network with O(N) complexity using a constrained backpropagation learning
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170606
G. Martinelli, R. Prefetti
Abstract: A novel associative memory model derived from the discrete Hopfield neural network is presented. Its architecture is greatly simplified because the number of interconnections grows only linearly with the dimensionality of the stored patterns. It uses a modified backpropagation algorithm as the learning tool. During the retrieval phase the network operates as an autoassociative BAM (bidirectional associative memory), searching for a minimum of an appropriate energy function. Computer simulations show the good performance of the proposed learning method in terms of capacity and number of spurious stable states.
Citations: 1
Passive sonar processing using neural networks
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170552
P. Vanhoutte, K. Deegan, K. Khorasani
Abstract: The use of a two-stage neural network architecture for target detection in a passive, listen-only sonar is discussed. The first stage is a Hopfield network that suppresses noise; the second stage uses a bidirectional associative memory (BAM) to decide whether a target has been detected. A second architecture using only a single BAM stage is also presented for illustrative purposes. The target is assumed to emit a single-tone sinusoid as its signature, and the signal is assumed to be perturbed only by white Gaussian noise. This network structure is shown to provide correct detection at a signal-to-noise ratio of -21 dB, a 6 dB improvement in target detection over a similar network using a perceptron in the second stage. Performance is shown to be limited by the size of the first-stage Hopfield network and by the training set applied to it.
Citations: 1
A neural network approach to on-line identification of non-linear systems
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170404
P. Mills, Albert Y. Zomaya
Abstract: The authors address three aspects of the neural identification of nonlinear systems. First, a method of extending the error backpropagation neural network to perform online identification of a system is considered, which enables the investigation of adaptive nonlinear process control based on neural identification. Second, the neural identification was successfully tested on a complex nonlinear composite system that includes formidable, but realistic, nonlinear process characteristics such as hysteresis, demonstrating the general applicability of identification using neural techniques. Third, the neural identification method was compared with online identification based on the well-established linear least-squares technique. The comparison highlights the faster adaptation of linear identification against the higher asymptotic accuracy of neural identification.
Citations: 7
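The core idea of online neural identification is to update the network after every plant sample rather than over a batch. The sketch below illustrates this with a one-hidden-layer backpropagation network; the plant nonlinearity, network size, and learning rate are all illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def plant(u):
    # Hypothetical unknown static nonlinearity standing in for the plant.
    return 0.6 * np.sin(np.pi * u) + 0.3 * u ** 3

H, LR = 16, 0.05                           # hidden units, learning rate
W1 = rng.normal(0.0, 0.5, H); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, H); b2 = 0.0

errs = []
for t in range(20000):                     # online loop: one sample at a time
    u = rng.uniform(-1, 1)
    h = np.tanh(W1 * u + b1)               # forward pass
    y_hat = W2 @ h + b2
    e = y_hat - plant(u)
    errs.append(e * e)
    # Per-sample backpropagation update
    grad_h = e * W2 * (1 - h ** 2)
    W2 -= LR * e * h; b2 -= LR * e
    W1 -= LR * grad_h * u; b1 -= LR * grad_h

# The squared identification error drops as online learning proceeds.
assert np.mean(errs[-1000:]) < np.mean(errs[:1000])
```

Because each update uses only the current sample, the identifier tracks slowly varying plants, at the cost of the slower initial convergence the abstract contrasts with linear least squares.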
Communication network routing using neural nets-numerical aspects and alternative approaches
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170490
T. Fritsch, W. Mandel
Abstract: The authors discuss various approaches to using Hopfield networks for routing problems in computer communication networks. It is shown that the classical approach using the original Hopfield network leads to evident numerical problems and hence is not practicable. The heuristic choice of the Lagrange parameters presented in the literature can yield incorrect solutions for variable dimensions, or requires a very time-consuming search for the correct parameter sets. A modified method based on eigenvalue analysis with predetermined parameters yields recognizable improvements, but it cannot produce correct solutions for different topologies with higher dimensions. From a numerical viewpoint, determining the eigenvalues of the connection matrix involves severe problems, such as stiffness, and shows evident instability of the simulated differential equations. The authors present possible alternative approaches such as the self-organizing feature map and modifications of the Hopfield net, e.g. mean-field annealing and the Potts glass model.
Citations: 44
Polynomial functions can be realized by finite size multilayer feedforward neural networks
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170425
N. Toda, Ken-ichi Funahashi, S. Usui
Abstract: The authors present an analytic method for constructing polynomial functions with multilayer feedforward neural networks. Because polynomials consist of multiplications and linear weighted summations, and a single unit already performs a weighted summation, any polynomial function can be represented by a neural network if a multiplier can be constructed as a neural network. The authors construct a neural network module with one hidden layer that works as a multiplier (referred to as a neural multiplier module). It is shown, in principle, that the multiplier can be approximated with arbitrary accuracy on a bounded closed set by a neural network with four hidden units.
Citations: 10
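One standard way to get a four-hidden-unit multiplier, sketched below, combines the identity xy = ((x+y)² − (x−y)²)/4 with the second-order Taylor approximation s(a+h) + s(a−h) − 2s(a) ≈ s''(a)h². Whether this is the paper's exact construction is not stated in the abstract; the bias and scale used here are illustrative.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

A, EPS = 1.0, 1e-2                          # illustrative bias and input scale
s = sigmoid(A)
spp = s * (1 - s) * (1 - 2 * s)             # sigma''(A) for the logistic sigmoid

def neural_multiply(x, y):
    """Approximate x*y with one hidden layer of four sigmoid units."""
    u, v = x + y, x - y
    # The two symmetric pairs approximate u^2 and v^2 up to a common offset,
    # which cancels in the difference; a linear output unit rescales.
    hidden = (sigmoid(EPS * u + A) + sigmoid(-EPS * u + A)
              - sigmoid(EPS * v + A) - sigmoid(-EPS * v + A))
    return hidden / (4 * spp * EPS ** 2)

# Accurate on a bounded set; the error shrinks as EPS -> 0.
assert abs(neural_multiply(0.3, -0.7) - (0.3 * -0.7)) < 1e-2
```

Given such a multiplier module, any monomial is a chain of multiplications and any polynomial a weighted sum of monomials, which is the representation argument the abstract relies on.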
An object recognition system using self-organising neural networks
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170778
V. Chandrasekaran, M. Palaniswami, T. Caelli
Abstract: An object recognition system is proposed that uses a self-organizing neural network as the basic module for processing feature vectors to provide evidence for the recognition state. The modules are integrated to represent various instances of the object scene for which the features are known a priori. The basic architecture of the proposed system was configured to accept either a single feature vector or multiple feature vectors at a time. The system was trained on a hypothetical three-object data set and tested for recognition on object scenes with and without occlusion. The simulation results confirmed the success of the proposed approach.
Citations: 3
A new learning approach to enhance the storage capacity of the Hopfield model
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170650
H. Oh, S. Kothari
Abstract: A new learning technique is introduced to address the small and restrictive storage capacity of the Hopfield model. The technique exploits the maximum storage capacity and fails only if no appropriate weights exist to store the given set of patterns. Because it is not based on function minimization, there is no danger of getting stuck in local minima, and it is free from the step-size and moving-target problems. Learning is very fast, and its speed depends on the difficulty of the training patterns rather than on the parameters of the algorithm. The technique is scalable: its performance does not degrade as the problem size increases. An extensive analysis of the learning technique is provided through simulation results.
Citations: 15
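The abstract does not spell out the algorithm, but a well-known technique with the same properties (succeeds whenever suitable weights exist, no error-function minimization, capacity beyond the Hebbian ~0.14N limit) is perceptron-style learning of the Hopfield weights, sketched below. The sizes and iteration cap are illustrative, and this should be read as a related idea rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 32, 12                                # 12 patterns: above the Hebbian limit
X = rng.choice([-1, 1], size=(P, N))
W = np.zeros((N, N))

for sweep in range(200):                     # sweep until all patterns are fixed points
    stable = True
    for x in X:
        local = W @ x                        # local fields for this pattern
        for i in np.flatnonzero(np.sign(local) != x):
            W[i] += x[i] * x                 # perceptron update for the violated row
            W[i, i] = 0.0                    # keep zero self-connections
            stable = False
    if stable:
        break

# Every stored pattern is now a fixed point of the update x -> sign(W x).
assert all(np.array_equal(np.sign(W @ x), x) for x in X)
```

Each row of W is learned as an independent perceptron, so the procedure converges in finitely many updates whenever separating weights exist, which mirrors the abstract's "fails only if appropriate weights do not exist" guarantee.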
Discovering production rules with higher order neural networks: a case study. II
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170457
A. Kowalczyk, H. Ferrá, K. Gardiner
Abstract: It is demonstrated by example that neural networks can be used successfully for the automatic extraction of production rules from empirical data. The case considered is a popular public-domain database of 8124 mushrooms. Using a term-selection algorithm, a number of very accurate mask perceptrons (a kind of higher-order network or polynomial classifier) were developed. Rounding of the synaptic weights was then applied, in many cases yielding networks with integer weights, which were subsequently converted to production rules. It is also shown that focusing the network's attention on a smaller subset of useful attributes, ordered by decreasing discriminating ability, helps significantly in accurate rule generation.
Citations: 14
Improving error tolerance of self-organizing neural nets
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks. Pub Date: 1991-11-18. DOI: 10.1109/IJCNN.1991.170279
F. Sha, Q. Gan
Abstract: A hybrid neural net (HNN) is developed that combines the ART1 network introduced by G.A. Carpenter and S. Grossberg (1987, 1988) with the Hopfield associative memory (HAM). The HAM diminishes noise in the samples and provides them to ART1 as inputs. To match the capacity of the HAM with that of ART1, a new recall algorithm (NHAM) is introduced that enlarges the capacity of the HAM. Based on NHAM and HNN, a revised version of HNN (RHNN) is introduced; the difference is that RHNN has feedback loops while HNN has only feedforward paths, so the ART1 in RHNN supplies information for the HAM to recall memories. Computer simulation demonstrated that RHNN has several advantages.
Citations: 0