{"title":"图像回忆和模式分类的迭代自联想记忆模型","authors":"S. Chien, In-Cheol Kim, Dae-Young Kim","doi":"10.1109/IJCNN.1991.170377","DOIUrl":null,"url":null,"abstract":"Autoassociative single-layer neural networks (SLNNs) and multilayer perceptron (MLP) models have been designed to achieve English-character image recall and classification. These two models are trained on the pseudoinverse algorithm and backpropagation learning algorithms, respectively. Improvements on the error-correcting effect of these two models can be achieved by introducing a feedback structure which returns autoassociative image outputs and classification tag fields into the network's inputs. The two models are compared in terms of character image recall and classification capabilities. Experimental results indicative that the MLP network required longer learning time and a smaller number of weights, and showed more stable variations in noise-correcting capability and classification rate with respect to the change of the numbers of stored patterns than the SLNN.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Iterative autoassociative memory models for image recalls and pattern classifications\",\"authors\":\"S. Chien, In-Cheol Kim, Dae-Young Kim\",\"doi\":\"10.1109/IJCNN.1991.170377\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autoassociative single-layer neural networks (SLNNs) and multilayer perceptron (MLP) models have been designed to achieve English-character image recall and classification. These two models are trained on the pseudoinverse algorithm and backpropagation learning algorithms, respectively. Improvements on the error-correcting effect of these two models can be achieved by introducing a feedback structure which returns autoassociative image outputs and classification tag fields into the network's inputs. The two models are compared in terms of character image recall and classification capabilities. 
Experimental results indicative that the MLP network required longer learning time and a smaller number of weights, and showed more stable variations in noise-correcting capability and classification rate with respect to the change of the numbers of stored patterns than the SLNN.<<ETX>>\",\"PeriodicalId\":211135,\"journal\":{\"name\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1991-11-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1991.170377\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1991.170377","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Iterative autoassociative memory models for image recalls and pattern classifications
Autoassociative single-layer neural network (SLNN) and multilayer perceptron (MLP) models have been designed for English-character image recall and classification. The two models are trained with the pseudoinverse algorithm and the backpropagation learning algorithm, respectively. The error-correcting behavior of both models is improved by introducing a feedback structure that returns the autoassociative image outputs and classification tag fields to the network's inputs. The two models are compared in terms of character image recall and classification capabilities. Experimental results indicate that, compared with the SLNN, the MLP network required a longer learning time and a smaller number of weights, and its noise-correcting capability and classification rate varied more stably as the number of stored patterns changed.
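To make the recall mechanism described above concrete, the following is a minimal sketch (not the authors' code) of a pseudoinverse-trained autoassociative memory with the kind of iterative feedback recall the abstract describes: the network's output is fed back to its input until the recalled pattern stabilizes. The pattern sizes, the bipolar coding, and all function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of iterative autoassociative recall with a pseudoinverse-trained
# weight matrix. All sizes, names, and coding choices are illustrative assumptions.
import numpy as np

def train_pseudoinverse(patterns):
    """patterns: (n_units, n_patterns) matrix of bipolar (+1/-1) column vectors."""
    X = np.asarray(patterns, dtype=float)
    # W = X X^+ projects any input onto the subspace spanned by the stored patterns.
    return X @ np.linalg.pinv(X)

def recall(W, probe, max_iters=20):
    """Iteratively feed the output back into the input until it stops changing."""
    y = np.sign(np.asarray(probe, dtype=float))
    for _ in range(max_iters):
        y_next = np.sign(W @ y)
        y_next[y_next == 0] = 1          # resolve ties to +1
        if np.array_equal(y_next, y):    # converged to a fixed point
            break
        y = y_next
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored = rng.choice([-1.0, 1.0], size=(64, 4))   # 4 random 64-unit "character" patterns
    W = train_pseudoinverse(stored)
    noisy = stored[:, 0].copy()
    flip = rng.choice(64, size=6, replace=False)     # corrupt 6 of the 64 units
    noisy[flip] *= -1
    recalled = recall(W, noisy)
    print("bits recovered:", int(np.sum(recalled == stored[:, 0])), "/ 64")
```

In this sketch the feedback loop plays the role of the paper's iterative structure: each pass through the weight matrix pushes a corrupted probe closer to a stored pattern, and iteration stops once the output reproduces itself.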