Information representation analysis in a neural network
J. Figueroa-Nazuno, G. Perez-Elizalde, E. Vargas-Medina, M. G. Raggi-Gonzalez
[Proceedings] 1991 IEEE International Joint Conference on Neural Networks, 1991-11-18
DOI: 10.1109/IJCNN.1991.170721
Citations: 1
Abstract
The authors study the mathematical behavior of the hidden layer of a generalized delta rule (GDR) neural network by analyzing the network's weights and thresholds in cases where it learned and where it failed to learn, in a typical neurocomputation task. The GDR was implemented as a C language program. Three representation hypotheses are considered: (a) the local hypothesis, which states that information is encoded in local parts of the network; (b) the generalized hypothesis, which states that information is located in extended areas of the network; and (c) the global hypothesis, which states that the network's total behavior represents the information. Several intensive computations were carried out to analyze the network's internal behavior in situations where it did and did not learn. The results show clearly that learning is accounted for by representation as global behavior in the hidden layer, not by local behavior.