[Proceedings] 1991 IEEE International Joint Conference on Neural Networks: Latest Publications

Fault tolerance of lateral interaction networks
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170654
G. Bolt
Abstract: An examination of the fault tolerance properties of lateral interaction networks is presented. The general concept of a soft problem is discussed along with the resulting implications for reliability. Fault injection experiments were performed using several input datasets with differing characteristics in conjunction with various combinations of network parameters. It was found that a high degree of tolerance to faults existed and that the reliability of operation degraded smoothly. This result was largely independent of the nature of the input dataset and, to a lesser extent, of the choice of network parameters.
Citations: 1
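The abstract above does not spell out the fault-injection protocol. The following minimal sketch shows one plausible protocol, not the paper's setup: a growing fraction of the weights of a toy network is forced to zero (a stuck-at-zero fault model) and the relative change in output is measured. The network, fault model, and error metric are illustrative assumptions.

```python
# Hedged sketch of a fault-injection experiment on a toy weight matrix.
# The single-layer tanh update, stuck-at-zero fault model, and relative-error
# metric are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def forward(W, x):
    """Toy single-layer update standing in for a lateral interaction network."""
    return np.tanh(W @ x)

def inject_faults(W, fraction, rng):
    """Return a copy of W with the given fraction of weights stuck at zero."""
    Wf = W.copy()
    n_faults = int(fraction * W.size)
    idx = rng.choice(W.size, size=n_faults, replace=False)
    Wf.flat[idx] = 0.0
    return Wf

W = rng.normal(scale=0.3, size=(32, 32))
x = rng.normal(size=32)
clean = forward(W, x)

for frac in (0.0, 0.05, 0.1, 0.2, 0.4):
    faulty = forward(inject_faults(W, frac, rng), x)
    err = np.linalg.norm(faulty - clean) / np.linalg.norm(clean)
    print(f"fault fraction {frac:.2f}: relative output error {err:.3f}")
```

A smoothly increasing error curve as the fault fraction grows is the kind of graceful degradation the paper reports.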
Inherent structure detection by neural sequential associator
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170704
I. Matsuba
Abstract: A sequential associator based on a feedback multilayer neural network is proposed to analyze inherent structures in a sequence generated by a nonlinear dynamical system and to predict a future sequence based on these structures. The network represents time correlations in the connection weights during learning. It is capable of detecting the inherent structure and explaining the behavior of systems. The structure of the neural sequential associator, inherent structure detection, and the optimal network size based on the use of an information criterion are discussed.
Citations: 0
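The paper's associator is a feedback multilayer network; the sketch below is only a simplified stand-in that illustrates the prediction task on a sequence from a nonlinear dynamical system. A small window-based MLP, the logistic map as the data source, and the training hyperparameters are all assumptions, not the paper's architecture.

```python
# Hedged sketch: one-step-ahead prediction of a logistic-map sequence with a
# tiny sliding-window MLP trained by plain gradient descent on squared error.
import numpy as np

rng = np.random.default_rng(1)

# Sequence from a nonlinear dynamical system (logistic map, chaotic regime).
T, window = 500, 3
s = np.empty(T)
s[0] = 0.3
for t in range(1, T):
    s[t] = 3.9 * s[t - 1] * (1.0 - s[t - 1])

X = np.stack([s[t:t + window] for t in range(T - window)])  # input windows
y = s[window:]                                              # next values

# One hidden layer of 16 tanh units.
H = 16
W1 = rng.normal(scale=0.5, size=(window, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H);           b2 = 0.0

lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation through the two layers.
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y);  gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

final_pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("final mean squared prediction error:", float(np.mean((final_pred - y) ** 2)))
```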
Adjustment of the basin size in autoassociative memories by use of the BPTT technique
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170674
T. Hatanaka, Y. Nishikawa
Abstract: An auto-associative memory is constructed in a recurrent network whose connection matrix is determined by use of backpropagation through time (BPTT). Through several computer simulations, basins of the memory generated by this method are compared with those generated by the conventional methods. In particular, the ability of the BPTT to adjust the basin size is investigated in detail.
Citations: 0
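The comparison in this paper rests on measuring basin sizes empirically. The sketch below shows one plausible way to estimate the basin radius of stored patterns; the connection matrix uses the conventional Hebbian outer-product rule, and the BPTT training that the paper actually investigates is omitted. Network size, number of patterns, and the update rule are assumptions.

```python
# Hedged sketch: estimating the basin radius of stored patterns in an
# autoassociative memory built with the conventional outer-product rule.
import numpy as np

rng = np.random.default_rng(2)
N, P = 64, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian (outer-product) connection matrix with zero diagonal.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(W, x, steps=20):
    """Synchronous sign-threshold updates until a fixed point (assumption)."""
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

def basin_radius(W, p, trials=50):
    """Largest number of flipped bits from which p was always recovered."""
    for k in range(1, N // 2):
        for _ in range(trials):
            x = p.copy()
            flip = rng.choice(N, size=k, replace=False)
            x[flip] *= -1.0
            if not np.array_equal(recall(W, x), p):
                return k - 1
    return N // 2 - 1

for i, p in enumerate(patterns):
    print(f"pattern {i}: estimated basin radius ~ {basin_radius(W, p)} bits")
```

Repeating the same measurement with a BPTT-trained connection matrix would give the kind of comparison the abstract describes.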
Synaptic and somatic learning and adaptation in fuzzy neural systems
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170510
M. Gupta, J. Qi
Abstract: An attempt is made to establish some basic models for fuzzy neurons. Three types of fuzzy neural models are proposed. Neuron I is described by logical equations or if-then rules; its inputs are either fuzzy sets or crisp values. Neuron II, with numerical inputs, and neuron III, with fuzzy inputs, are considered to be simple extensions of nonfuzzy neurons. A few methods of how these neurons change themselves during learning to improve their performance are also given. The notion of synaptic and somatic learning and adaptation is also introduced, which seems to be a powerful approach for developing a new class of fuzzy neural networks. Such an approach may have applications in the processing of fuzzy information and the design of expert systems with learning and adaptation abilities.
Citations: 0
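The abstract does not give the paper's equations, so the sketch below uses one common formulation of fuzzy neural units, max-min (OR/AND) neurons, purely as an illustration; the choice of operators and the example weights are assumptions.

```python
# Hedged sketch: max-min fuzzy neurons, a common (assumed) formulation of
# fuzzy neural units operating on membership grades in [0, 1].
import numpy as np

def fuzzy_or_neuron(x, w):
    """OR-type unit: OR_i (x_i AND w_i), with min as AND and max as OR."""
    return np.max(np.minimum(x, w))

def fuzzy_and_neuron(x, w):
    """AND-type unit: AND_i (x_i OR w_i), with max as OR and min as AND."""
    return np.min(np.maximum(x, w))

x = np.array([0.2, 0.7, 0.9])   # fuzzy membership grades of the inputs
w = np.array([0.5, 0.4, 0.8])   # fuzzy connection weights

print("OR neuron output :", fuzzy_or_neuron(x, w))   # 0.8
print("AND neuron output:", fuzzy_and_neuron(x, w))  # 0.5
```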
Efficient question answering in a hybrid system
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170447
J. Diederich, D. Long
Abstract: A connectionist model for answering open-class questions in the context of text processing is presented. The system answers questions from different question categories, such as how, why, and consequence questions. The system responds to a question by generating a set of possible answers that are weighted according to their plausibility. Search is performed by means of a massively parallel directed spreading activation process. The search process operates on several knowledge sources (i.e., connectionist networks) that are learned or explicitly built-in. Spreading activation involves the use of signature messages, which are numeric values that are propagated throughout the networks and identify a particular question category (this makes the system hybrid). Binder units that gate the flow of activation between textual units receive these signatures and change their states. That is, the binder units either block the spread of activation or allow the flow of activation in a certain direction. The process results in a pattern of activation that represents a set of candidate answers based on available knowledge sources.
Citations: 5
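A minimal sketch of the mechanism described above, signature-gated spreading activation, is given below. The toy knowledge graph, the question categories used as signatures, and the decay factor are illustrative assumptions; the paper's actual knowledge sources are learned or built-in connectionist networks.

```python
# Hedged sketch: directed spreading activation where binder links only pass
# activation when the question-category signature matches.
from collections import defaultdict

# Directed edges: (source, target, required_signature or None).
# Edges with a required signature model binder units gating the flow.
edges = [
    ("storm", "power-failure", "consequence"),
    ("storm", "rain", None),
    ("power-failure", "candles", "consequence"),
    ("rain", "wet-streets", "consequence"),
    ("storm", "low-pressure", "why"),
]

def spread(source, signature, steps=3, decay=0.5):
    """Propagate activation from `source`, gated by the question signature."""
    activation = defaultdict(float)
    activation[source] = 1.0
    for _ in range(steps):
        new = defaultdict(float, activation)
        for s, t, sig in edges:
            if activation[s] > 0 and (sig is None or sig == signature):
                new[t] = max(new[t], decay * activation[s])
        activation = new
    # Candidate answers, weighted by plausibility (activation level).
    return sorted(((n, a) for n, a in activation.items() if n != source and a > 0),
                  key=lambda kv: -kv[1])

print(spread("storm", "consequence"))   # consequence-type candidates
print(spread("storm", "why"))           # why-type candidates
```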
A cognitive framework for hybrid systems
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170449
J. Wallace, K. Bluff
Abstract: The authors explore the potential of a specific cognitive architecture to provide the relational mechanism needed to capitalize on the respective strengths of symbolic and nonsymbolic modes of representation, and on the benefits of their interaction in achieving machine intelligence. This architecture is strongly influenced by the BAIRN system of I. Wallace et al. (1987), which provides a general theory of human cognition with a particular emphasis on the function of learning. This cognitive architecture is being used in a generic approach to the aspects of human performance designated by the term situation awareness.
Citations: 1
The handling of don't care attributes
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170539
Hahn-Ming Lee, Ching-Chi Hsu
Abstract: A critical factor that affects the performance of neural network training algorithms and the generalization of trained networks is the training instances. The authors consider the handling of don't care attributes in training instances. Several approaches are discussed and their experimental results are presented. The following approaches are considered: (1) replace don't care attributes with a fixed value; (2) replace don't care attributes with their maximum or minimum encoded values; (3) replace don't care attributes with their maximum and minimum encoded values; and (4) replace don't care attributes with all their possible encoded values.
Citations: 7
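The four approaches enumerated in the abstract translate directly into instance preprocessing. The sketch below applies each of them to a toy training instance; the encoding range [0, 1], the fixed value 0.5, and the coarse value grid in approach (4) are assumptions made for illustration.

```python
# Hedged sketch: the four don't-care replacement strategies on a toy instance.
# `None` marks a don't-care attribute.
import itertools

instance = [0.2, None, 0.9, None]   # attribute values encoded in [0, 1]
lo, hi, fixed = 0.0, 1.0, 0.5

# (1) Replace don't-cares with a fixed value.
fixed_fill = [fixed if v is None else v for v in instance]

# (2) Replace don't-cares with the maximum (or minimum) encoded value.
max_fill = [hi if v is None else v for v in instance]

# (3) Replace with both the maximum and the minimum: two instances per original.
both_fill = [[hi if v is None else v for v in instance],
             [lo if v is None else v for v in instance]]

# (4) Replace with all possible encoded values (here a coarse grid):
#     one instance per combination.
grid = [0.0, 0.5, 1.0]
slots = [grid if v is None else [v] for v in instance]
all_fill = [list(c) for c in itertools.product(*slots)]

print(fixed_fill)
print(max_fill)
print(both_fill)
print(len(all_fill), "expanded instances")
```

Approaches (3) and (4) trade a larger training set for a more faithful representation of the missing information.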
Applications of the pRAM
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170348
T. Clarkson, D. Gorse, Y. Guan, J.G. Taylor
Abstract: The probabilistic RAM (pRAM) neuron is highly nonlinear and stochastic, and it is hardware-realizable. The following applications of the pRAM are discussed: the processing of half-tone images, the generation of topological maps, the storage of temporal sequences, and the recognition of regular grammars.
Citations: 2
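As a rough illustration of the unit itself: a pRAM addresses a table of firing probabilities with its binary input vector and emits a stochastic binary output with the addressed probability. The sketch below follows that description; the table contents and the absence of any training rule are assumptions.

```python
# Hedged sketch: a single probabilistic RAM (pRAM) neuron. Binary inputs form
# an address into a table of firing probabilities; the output is a stochastic
# binary spike emitted with the addressed probability.
import numpy as np

rng = np.random.default_rng(4)

class PRAMNeuron:
    def __init__(self, n_inputs, rng):
        self.rng = rng
        # One firing probability per input address (2**n_inputs entries).
        self.memory = rng.uniform(size=2 ** n_inputs)

    def address(self, x):
        """Interpret the binary input vector as an integer address."""
        return int("".join(str(int(b)) for b in x), 2)

    def fire(self, x):
        """Emit 1 with the stored probability for this address, else 0."""
        return int(self.rng.random() < self.memory[self.address(x)])

neuron = PRAMNeuron(n_inputs=3, rng=rng)
x = np.array([1, 0, 1])
spikes = [neuron.fire(x) for _ in range(1000)]
print("stored p:", round(neuron.memory[neuron.address(x)], 3),
      "empirical firing rate:", np.mean(spikes))
```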
Optimally generalizing neural networks
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170648
H. Ogawa, E. Oja
Abstract: The problem of approximating a real function f of L variables, given only in terms of its values y_1, ..., y_M at a small set of sample points x_1, ..., x_M in R^L, is studied in the context of multilayer neural networks. Using the theory of reproducing kernels of Hilbert spaces, it is shown that this problem is the inverse of a linear model relating the values y_m to the function f itself. The authors consider the least-mean-square training criterion for nonlinear multilayer neural network architectures that learn the training set completely. The generalization property of a neural network is defined in terms of function reconstruction and the concept of the optimally generalizing neural network (OGNN) is proposed. It is a network that minimizes a criterion given in terms of the true error between the original function f and the reconstruction f_1 in the function space, instead of minimizing the error at the sample points only. As an example of the OGNN, a projection filter (PF) criterion is considered and the PFGNN is introduced. The network is of the two-layer nonlinear-linear type.
Citations: 5
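The key distinction drawn above, error at the sample points versus the true error in function space, can be made concrete on a toy 1-D problem. The sketch below uses a high-degree polynomial fit as a stand-in for "a network that learns the training set completely"; it is not the paper's OGNN or projection-filter criterion, and the target function, noise level, and degree are assumptions.

```python
# Hedged sketch: sample-point error vs. true error in function space on a toy
# 1-D regression problem, illustrating why minimizing only the sample error
# need not minimize the reconstruction error.
import numpy as np

rng = np.random.default_rng(5)

f = lambda x: np.sin(2 * np.pi * x)                     # the "unknown" function
x_train = rng.uniform(size=8)
y_train = f(x_train) + 0.2 * rng.normal(size=x_train.size)

# A degree-7 polynomial can drive the error at the 8 sample points to ~0 ...
model = np.poly1d(np.polyfit(x_train, y_train, deg=7))
sample_err = np.mean((model(x_train) - y_train) ** 2)

# ... while the true error over the whole domain typically remains far larger.
x_dense = np.linspace(0, 1, 1000)
true_err = np.mean((model(x_dense) - f(x_dense)) ** 2)

print(f"error at sample points : {sample_err:.2e}")
print(f"true error over domain : {true_err:.2e}")
```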
Optical inner-product implementations for multi-layer BAM with 2-dimensional patterns
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks | Pub Date: 1991-11-18 | DOI: 10.1109/IJCNN.1991.170675
Hyuek-Jae Lee, Soo-Young Lee, C. Park, S. Shin
Abstract: The authors present an optical inner-product architecture for MBAM (multi-layer bidirectional associative memory) with two-dimensional input and output patterns. The proposed architecture utilizes compact solid modules for single-layer feedforward networks, which may be cascaded for MBAM. Instead of analog interconnection weights, the inner-product scheme stores input and output patterns. For binary input and output patterns this inner-product scheme requires binary spatial light modulators only, and is scalable to very large-size implementations. Unlike optical neural networks for one-dimensional patterns, multifocus holograms and lenslet arrays become essential components in these modules. The performance of the MBAM was demonstrated by an electrooptic inner-product implementation for the exclusive-OR problem.
Citations: 0
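The inner-product recall scheme mentioned above, storing the pattern pairs themselves rather than an interconnection matrix, can be sketched in software. The example below shows single-layer bidirectional recall of bipolar 2-D patterns flattened to vectors; the optical hardware (spatial light modulators, multifocus holograms, lenslet arrays), the multi-layer cascade needed for hard problems such as XOR, and the stored patterns are not taken from the paper and are assumptions.

```python
# Hedged sketch: inner-product recall in a single-layer bidirectional
# associative memory storing bipolar pattern pairs.
import numpy as np

rng = np.random.default_rng(6)

# Three stored (input, output) pairs: 6x6 input "images" and 3x3 output
# "images", flattened to vectors (illustrative assumption).
X = rng.choice([-1.0, 1.0], size=(3, 36))
Y = rng.choice([-1.0, 1.0], size=(3, 9))

def bipolar_sign(v):
    s = np.sign(v)
    s[s == 0] = 1.0
    return s

def forward(x):
    """Recall an output by weighting stored outputs with inner products <x, x_m>."""
    return bipolar_sign(sum((x @ xm) * ym for xm, ym in zip(X, Y)))

def backward(y):
    """Recall an input by weighting stored inputs with inner products <y, y_m>."""
    return bipolar_sign(sum((y @ ym) * xm for xm, ym in zip(X, Y)))

# Recall from a noisy version of the first stored input (two flipped pixels).
probe = X[0].copy()
probe[[3, 7]] *= -1.0
print("recalled output matches stored pair:", np.array_equal(forward(probe), Y[0]))
```

Storing the pattern pairs and computing inner products at recall time is what lets the optical version get by with binary spatial light modulators instead of analog interconnection weights.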