Constructive proof of efficient pattern storage in the multi-layer perceptron
A. Gopalakrishnan, Xiangping Jiang, Mu-Song Chen, M. Manry
Proceedings of 27th Asilomar Conference on Signals, Systems and Computers, November 1993. DOI: 10.1109/ACSSC.1993.342540
We show that the pattern storage capability of the Gabor polynomial is much higher than the commonly used lower bound on multi-layer perceptron (MLP) pattern storage. We also show that multi-layer perceptron networks with second- and third-degree polynomial activation functions can be constructed that efficiently implement Gabor polynomials and therefore have the same high pattern storage capability. These polynomial networks can be mapped to conventional sigmoidal MLPs of the same efficiency. We show that training techniques such as output weight optimization and conjugate gradient attain only the lower bound of pattern storage; they are therefore not final solutions to the MLP training problem.
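To make the capacity comparison concrete, the following minimal Python sketch counts the free coefficients of a Gabor (multivariate) polynomial against a weight-based lower bound on MLP pattern storage. This is an illustration only, not taken from the paper: the helper names are hypothetical, and the exact form of the lower bound (total weights and thresholds divided by the number of outputs) is an assumption that may differ from the bound the authors use.

from math import comb

def gabor_coefficient_count(n_inputs: int, degree: int) -> int:
    # A degree-d polynomial in N variables has C(N + d, d) coefficients,
    # so a single-output polynomial can interpolate up to that many
    # distinct training patterns.
    return comb(n_inputs + degree, degree)

def mlp_storage_lower_bound(n_inputs: int, n_hidden: int, n_outputs: int) -> int:
    # Assumed form of the commonly quoted lower bound: weights and
    # thresholds of a single-hidden-layer MLP, divided by the number
    # of outputs.
    n_weights = (n_inputs + 1) * n_hidden + (n_hidden + 1) * n_outputs
    return n_weights // n_outputs

if __name__ == "__main__":
    N, d = 8, 3        # eight inputs, degree-3 polynomial
    Nh, M = 10, 1      # ten hidden units, one output
    print("Gabor polynomial capacity:", gabor_coefficient_count(N, d))   # 165
    print("MLP lower-bound estimate :", mlp_storage_lower_bound(N, Nh, M))  # 101

Under these illustrative numbers, the degree-3 polynomial in eight variables has 165 free coefficients, while the weight-based lower bound for a comparably sized MLP estimates only 101 stored patterns; this kind of gap is what the paper's constructive mapping from Gabor polynomials to MLPs is meant to close.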