IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339): Latest Articles

Bayesian neural networks with correlating residuals
Aki Vehtari, J. Lampinen
DOI: 10.1109/IJCNN.1999.832623
Abstract: In a multivariate regression problem it is often assumed that the residuals of the outputs are independent of each other. In many applications a more realistic model would allow dependencies between the outputs. In this paper we show how a Bayesian treatment using the Markov chain Monte Carlo method can allow for a full covariance matrix with a multilayer perceptron neural network.
Citations: 7
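A minimal NumPy sketch of the likelihood term the abstract describes, with the residuals of a multilayer perceptron modelled by a full covariance matrix rather than independent per-output noise. The layer sizes, tanh hidden layer and the fixed covariance below are illustrative assumptions; the paper's Markov chain Monte Carlo sampling of the weights and covariance is not shown.

```python
import numpy as np

def mlp_forward(X, W1, b1, W2, b2):
    """One-hidden-layer MLP: X of shape (n, d_in) -> predictions (n, d_out)."""
    return np.tanh(X @ W1 + b1) @ W2 + b2

def neg_log_likelihood(Y, Y_pred, Sigma):
    """Gaussian negative log-likelihood of the residuals with a full covariance
    matrix Sigma (correlated outputs; an assumption made for illustration)."""
    R = Y - Y_pred                                   # residuals, shape (n, d_out)
    n, d = R.shape
    Sigma_inv = np.linalg.inv(Sigma)
    _, logdet = np.linalg.slogdet(Sigma)
    quad = np.einsum('ni,ij,nj->', R, Sigma_inv, R)  # sum of r^T Sigma^{-1} r
    return 0.5 * (quad + n * logdet + n * d * np.log(2 * np.pi))

# Toy usage with random data and parameters.
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(50, 3)), rng.normal(size=(50, 2))
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])           # correlated output noise
print(neg_log_likelihood(Y, mlp_forward(X, W1, b1, W2, b2), Sigma))
```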
A new scheme for extracting multi-temporal sequence patterns
P. Hong, S. Ray, Thomas Huang
DOI: 10.1109/IJCNN.1999.833494
Abstract: This paper proposes a new scheme for unsupervised multi-temporal sequence pattern extraction. The main idea of the scheme is iterative coarse-to-fine data examination. We decompose a pattern into ambiguous sub-patterns and distinguishable sub-patterns (DSPs). In each iteration, we coarsely examine the training temporal signal sequence by training an Elman neural network. The trained Elman network is used to select the DSP candidate set. Then we look at the training signals around the DSPs and use maximum likelihood criteria to expand them into whole patterns. We cut the newly found patterns out of the training signal sequence and repeat the whole procedure until no more new patterns are found. The experimental results show that this method is promising.
Citations: 8
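A minimal sketch of the Elman (simple recurrent) network used in the coarse examination step, assuming a small tanh hidden layer whose state feeds back as context; the DSP candidate selection and maximum-likelihood expansion steps of the scheme are not shown.

```python
import numpy as np

def elman_forward(seq, W_in, W_rec, W_out, b_h, b_o):
    """Run a sequence of shape (T, d_in) through an Elman network, returning (T, d_out)."""
    h = np.zeros(W_rec.shape[0])                      # context layer starts at zero
    outputs = []
    for x in seq:
        h = np.tanh(W_in @ x + W_rec @ h + b_h)       # hidden state feeds back each step
        outputs.append(W_out @ h + b_o)
    return np.array(outputs)

# Toy usage on a short synthetic temporal signal.
rng = np.random.default_rng(1)
d_in, d_hid, d_out = 1, 6, 1
params = (rng.normal(size=(d_hid, d_in)), rng.normal(size=(d_hid, d_hid)) * 0.5,
          rng.normal(size=(d_out, d_hid)), np.zeros(d_hid), np.zeros(d_out))
signal = np.sin(np.linspace(0, 6, 40)).reshape(-1, 1)
print(elman_forward(signal, *params).shape)           # (40, 1)
```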
Knowledge matching model with dynamic weights based on the primary visual cortex
Yuan Bo, Liming Zhang
DOI: 10.1109/IJCNN.1999.831477
Abstract: Even in the field of biology, the principle of how knowledge can be efficiently utilized in a neural network has not been solved perfectly. This paper discusses a new model for knowledge matching based on the structure of the V1 area of the biological visual system. The contour of the object is stored as knowledge in the form of a chain code. During the matching process, the chain code is presented to control the dynamics of the neurons in a neural network with a V1-like structure. Cooperating with the dynamic weights, the active neurons reconstruct the object's contour at the place where the object lies in the visual field. This model is an exploration of how knowledge is represented and utilized in the brain.
Citations: 0
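A minimal sketch of storing a contour as a chain code, the knowledge representation the abstract names. The 8-direction Freeman convention below is a standard choice and an assumption about the exact coding used; the V1-like matching network with dynamic weights is not shown.

```python
# Map a step (dx, dy) between neighbouring contour pixels to a direction code 0-7.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """Encode an ordered list of 8-connected contour points as a Freeman chain code."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# A small square contour traversed counter-clockwise.
square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1), (0, 0)]
print(chain_code(square))   # [0, 0, 2, 2, 4, 4, 6, 6]
```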
A gain perturbation method to improve the generalization performance for the recurrent neural network misfire detector
Pu Sun, K. Marko
DOI: 10.1109/IJCNN.1999.832599
Abstract: A common constraint on the application of neural networks to diagnostics and control of mass-manufactured systems is that training sets can only be obtained from a limited number of system exemplars. As a consequence, variations in the dynamic response across systems make it difficult to obtain excellent performance from the trained neural networks. In this paper we describe a gain perturbation method (GPM) to improve the generalization performance of neural network diagnostic monitors trained on a data set obtained from one individual vehicle and tested on data from another vehicle. The results show a significant improvement in the generalization performance of neural networks trained with GPM over ones trained without it.
Citations: 1
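A minimal sketch of one plausible reading of gain perturbation, assuming the gains are random multiplicative scale factors applied to the single-vehicle training signals so the detector sees the amplitude variation it would meet on other vehicles. The gain range and the toy signals are assumptions; the paper's recurrent misfire detector and the exact form of its gains are not reproduced here.

```python
import numpy as np

def perturb_gain(signals, low=0.8, high=1.2, rng=None):
    """Apply a random per-example gain factor to a batch of training signals."""
    if rng is None:
        rng = np.random.default_rng()
    gains = rng.uniform(low, high, size=(signals.shape[0], 1))
    return signals * gains

rng = np.random.default_rng(2)
batch = rng.normal(size=(4, 100))        # 4 toy engine-response traces (assumed data)
augmented = perturb_gain(batch, rng=rng)
print(augmented.shape)                   # (4, 100), each trace rescaled by its own gain
```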
Priority ordered architecture of neural networks
Wang Shoujue, Lu Huaxiang, C. Xiangdong, Li Yujian
DOI: 10.1109/IJCNN.1999.831054
Abstract: In the architecture introduced, the outputs of neurons (or neural nets) have different priorities, besides the differences in the topological position and value of these outputs. We discuss how priority ordered neural networks (PONNs) resemble knowledge representation in the human brain, and a general mathematical description of the PONN is introduced. The priority ordered single-layer perceptron (POSLP) and the priority ordered radial basis function nets (PORBFN) for pattern classification are analyzed. The experiments show that the learning speed of the POSLP and PORBFN is much faster than that of multilayered feedforward neural networks trained with existing BP algorithms.
Citations: 3
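A minimal sketch of one plausible reading of a priority ordered architecture: units are ranked, and the highest-priority unit whose output exceeds a threshold decides the class, with a default when none fires. The perceptron units, threshold and default label are illustrative assumptions, not the paper's definition of the PONN.

```python
import numpy as np

def priority_ordered_classify(x, units, threshold=0.0, default=None):
    """units: list of (priority, weights, bias, label); higher priority is checked first."""
    for _, w, b, label in sorted(units, key=lambda u: -u[0]):
        if w @ x + b > threshold:        # first sufficiently active unit wins
            return label
    return default

units = [(2, np.array([1.0, -1.0]), 0.0, "A"),   # checked first
         (1, np.array([0.0, 1.0]), -0.5, "B")]   # checked only if "A" stays silent
print(priority_ordered_classify(np.array([0.2, 0.9]), units))   # -> "B"
```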
The optimal value of self-connection
D. Gorodnichy
DOI: 10.1109/IJCNN.1999.831579
Abstract: The fact that reducing self-connections improves the performance of autoassociative networks built by the pseudo-inverse learning rule has been known for quite a while, but has not yet been studied in detail. In particular, it is known that decreasing the self-connection increases the direct attraction radius of the network, and also that it increases the number of spurious dynamic attractors. Thus, it has been concluded that the optimal value of the coefficient of self-connection reduction D lies somewhere in the range (0, 0.5). This paper gives an explicit answer to the question of what the optimal value of the self-connection reduction is. It shows how the indirect attraction radius increases as D decreases. A summary of the results pertaining to this phenomenon is presented.
Citations: 14
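A minimal sketch of the pseudo-inverse (projection) learning rule the abstract refers to, with the self-connections scaled by a coefficient. The convention used here (w_ii multiplied by D, so D = 1 keeps the original matrix and D = 0 removes self-connections) is an assumption made for illustration and may differ from the paper's exact definition of the reduction coefficient.

```python
import numpy as np

def pseudo_inverse_weights(patterns, D=1.0):
    """Weight matrix for bipolar patterns via the projection (pseudo-inverse) rule."""
    V = np.asarray(patterns, dtype=float).T          # (n_neurons, n_patterns)
    W = V @ np.linalg.pinv(V)                        # projection onto the pattern subspace
    W[np.diag_indices_from(W)] *= D                  # scale the self-connections
    return W

def recall(W, state, steps=20):
    """Synchronous recall dynamics with a sign activation."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s

patterns = [[1, -1, 1, -1, 1], [1, 1, -1, -1, 1]]
W = pseudo_inverse_weights(patterns, D=0.3)
noisy = np.array([1, -1, 1, -1, -1])                 # first pattern with the last bit flipped
print(recall(W, noisy))                              # ideally recovers the first pattern
```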
Quadrant-distance graphs: a method for visualizing neural network weight spaces
B. Linnell
DOI: 10.1109/IJCNN.1999.832624
Abstract: One of the major drawbacks of neural networks is the inability of the user to understand what is happening inside the network. Quadrant-distance (QD) graphs allow the user to graphically display a network's weight vector at any point in training, for networks of any size. This lets the user quickly and easily identify similarities or differences between solution sets. QD graphs may also be used for a variety of other analysis functions, such as comparing initial weights to final weights and observing the path of the network as it finds a solution.
Citations: 2
A comparison of radial basis function networks and fuzzy neural logic networks for autonomous star recognition
J. Dickerson, J. Hong, Z. Cox, D. Bailey
DOI: 10.1109/IJCNN.1999.836167
Abstract: Autonomous star recognition requires that many similar patterns be distinguished from one another with a small training set. Since these systems are implemented on board a spacecraft, the network needs to have low memory requirements and minimal computational complexity. Fast training speeds are also important since star sensor capabilities change over time. This paper compares two networks that meet these needs: radial basis function networks and neural logic networks. The neural logic networks performed much better than the radial basis function networks in terms of recognition accuracy, memory needed, and training speed.
Citations: 1
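A minimal sketch of the radial basis function network side of the comparison, assuming Gaussian basis functions with centres drawn from the training set and a least-squares output layer; the fuzzy neural logic network and the actual star-pattern features are not shown.

```python
import numpy as np

def rbf_design_matrix(X, centres, width):
    """Gaussian activation of every centre for every input row."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(3)
X_train = rng.normal(size=(30, 4))                    # toy feature vectors (assumed data)
y_train = (X_train[:, 0] > 0).astype(float)           # toy binary labels
centres = X_train[rng.choice(30, size=8, replace=False)]
Phi = rbf_design_matrix(X_train, centres, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)     # least-squares output weights
preds = rbf_design_matrix(X_train, centres, 1.0) @ w
print(np.mean((preds > 0.5) == (y_train > 0.5)))      # training accuracy
```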
Sequential learning for associative memory using Kohonen feature map
Takeo Yamada, M. Hattori, Masayuki Morisawa, Hiroshi Ito
DOI: 10.1109/IJCNN.1999.832675
Abstract: We propose a sequential learning algorithm for an associative memory based on the Kohonen feature map. In order to store new information without retraining the weights of previously learned information, weights-fixed neurons and weights-semi-fixed neurons are used in the proposed algorithm. Owing to the semi-fixed neurons, the associative memory becomes structurally robust. Moreover, it has the following features: 1) it is robust to noisy inputs; 2) it has high storage capacity; and 3) it can deal with one-to-many associations.
Citations: 26
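A minimal sketch of a Kohonen feature map update in which some neurons are weights-fixed (frozen) and their map neighbours are weights-semi-fixed (learning at a reduced rate), so new associations can be stored without disturbing old ones. The 1-D map, learning rates and neighbourhood function are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def som_step(W, x, fixed, lr=0.5, semi_lr=0.05, radius=1.0):
    """One update (in place) of a 1-D feature map W of shape (n_neurons, dim) towards x."""
    winner = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
    dist = np.abs(np.arange(len(W)) - winner)               # distance along the map
    h = np.exp(-(dist ** 2) / (2 * radius ** 2))            # neighbourhood function
    semi = np.roll(fixed, 1) | np.roll(fixed, -1)           # map neighbours of fixed neurons
    rate = np.where(fixed, 0.0, np.where(semi, semi_lr, lr))
    W += (rate * h)[:, None] * (x - W)
    return winner

rng = np.random.default_rng(4)
W = rng.normal(size=(10, 3))
fixed = np.zeros(10, dtype=bool)
fixed[2] = True                      # neuron 2 already stores an old association
print(som_step(W, rng.normal(size=3), fixed))
```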
Hybrid fuzzy logic and neural network model for fingerprint minutiae extraction
V. Sagar, J. Koh
DOI: 10.1109/IJCNN.1999.836178
Abstract: This paper presents research into the use of fuzzy-neuro technology in automated fingerprint recognition for the extraction of fingerprint features known as minutiae. The work presented here is an addendum to work carried out earlier by Sagar et al. (1995).
Citations: 48