{"title":"对称神经网络及其实例","authors":"Hee-Seung Na, Youngjin Park","doi":"10.1109/IJCNN.1992.287176","DOIUrl":null,"url":null,"abstract":"The concept of a symmetric neural network, which is not only structurally symmetric but also has symmetric weight distribution, is presented. The concept is further expanded to constrained networks, which may also be applied to some nonsymmetric problems in which there is some prior knowledge of the weight distribution pattern. Because these neural networks cannot be trained by the conventional training algorithm, which destroys the weight structure of the neural networks, a proper training algorithm is suggested. Three examples are shown to demonstrate the applicability of the proposed ideas. Use of the proposed concepts results in improved system performance, reduced network dimension, less computational load, and improved learning for the examples considered.<<ETX>>","PeriodicalId":286849,"journal":{"name":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1992-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Symmetric neural networks and its examples\",\"authors\":\"Hee-Seung Na, Youngjin Park\",\"doi\":\"10.1109/IJCNN.1992.287176\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The concept of a symmetric neural network, which is not only structurally symmetric but also has symmetric weight distribution, is presented. The concept is further expanded to constrained networks, which may also be applied to some nonsymmetric problems in which there is some prior knowledge of the weight distribution pattern. Because these neural networks cannot be trained by the conventional training algorithm, which destroys the weight structure of the neural networks, a proper training algorithm is suggested. Three examples are shown to demonstrate the applicability of the proposed ideas. Use of the proposed concepts results in improved system performance, reduced network dimension, less computational load, and improved learning for the examples considered.<<ETX>>\",\"PeriodicalId\":286849,\"journal\":{\"name\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1992-06-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1992.287176\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings 1992] IJCNN International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1992.287176","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: The concept of a symmetric neural network, one that is not only structurally symmetric but also has a symmetric weight distribution, is presented. The concept is further extended to constrained networks, which can also be applied to some nonsymmetric problems in which there is prior knowledge of the weight distribution pattern. Because conventional training algorithms destroy the weight structure of these networks, a suitable training algorithm is proposed. Three examples demonstrate the applicability of the proposed ideas. For the examples considered, use of the proposed concepts yields improved system performance, reduced network dimension, lower computational load, and improved learning.
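Only the abstract is available here, so the paper's exact training algorithm is unknown. As a hypothetical illustration of the general idea, the sketch below shows one common way to preserve a symmetric weight structure during gradient-descent training: project both the initial weights and every gradient onto the subspace of symmetric matrices, so the constraint W = W^T is never destroyed by the updates. All names, the toy task, and the projection step are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Hypothetical sketch (not the paper's algorithm): train a single linear
# layer under the constraint W == W.T by symmetrizing the gradient.

def symmetrize(M):
    """Project a square matrix onto the subspace of symmetric matrices."""
    return 0.5 * (M + M.T)

rng = np.random.default_rng(0)
n = 4
W = symmetrize(rng.standard_normal((n, n)))      # symmetric initial weights

X = rng.standard_normal((100, n))                # toy inputs
T = X @ symmetrize(rng.standard_normal((n, n)))  # toy targets from a symmetric map

lr = 0.01
for step in range(500):
    Y = X @ W                        # linear layer output
    G = X.T @ (Y - T) / len(X)       # gradient of mean squared error w.r.t. W
    W -= lr * symmetrize(G)          # symmetrized update keeps W = W.T exactly

assert np.allclose(W, W.T)           # weight structure preserved after training
print("final MSE:", np.mean((X @ W - T) ** 2))
```

Because the symmetric matrices form a linear subspace, projecting the gradient onto that subspace gives the steepest-descent direction that respects the constraint; the same projection idea extends to other weight-sharing patterns, which is presumably what the abstract's "constrained networks" generalization refers to.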