{"title":"一种多层感知器在稀疏训练数据条件下提高泛化能力的混合学习算法","authors":"M. Tonomura, K. Nakayama","doi":"10.1109/IJCNN.2001.939491","DOIUrl":null,"url":null,"abstract":"The backpropagation algorithm is mainly used for multilayer perceptrons. This algorithm is, however, difficult to achieve high generalization when the number of training data is limited, i.e. sparse training data. In this paper, a new learning algorithm is proposed. It combines the BP algorithm and modifies hyperplanes taking internal information into account. In other words, the hyperplanes are controlled by the distance between the hyperplanes and the critical training data, which locate close to the boundary. This algorithm works well for the sparse training data to achieve high generalization. In order to evaluate generalization, it is assumed that all data are normally distributed around the training data. Several simulations of pattern classification demonstrate the efficiency of the proposed algorithm.","PeriodicalId":346955,"journal":{"name":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"A hybrid learning algorithm for multilayer perceptrons to improve generalization under sparse training data conditions\",\"authors\":\"M. Tonomura, K. Nakayama\",\"doi\":\"10.1109/IJCNN.2001.939491\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The backpropagation algorithm is mainly used for multilayer perceptrons. This algorithm is, however, difficult to achieve high generalization when the number of training data is limited, i.e. sparse training data. In this paper, a new learning algorithm is proposed. It combines the BP algorithm and modifies hyperplanes taking internal information into account. In other words, the hyperplanes are controlled by the distance between the hyperplanes and the critical training data, which locate close to the boundary. This algorithm works well for the sparse training data to achieve high generalization. In order to evaluate generalization, it is assumed that all data are normally distributed around the training data. Several simulations of pattern classification demonstrate the efficiency of the proposed algorithm.\",\"PeriodicalId\":346955,\"journal\":{\"name\":\"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2001.939491\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. 
No.01CH37222)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2001.939491","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A hybrid learning algorithm for multilayer perceptrons to improve generalization under sparse training data conditions
The backpropagation (BP) algorithm is the standard method for training multilayer perceptrons. However, it has difficulty achieving high generalization when the amount of training data is limited, i.e., under sparse training data conditions. In this paper, a new learning algorithm is proposed. It combines the BP algorithm with a method that modifies the hyperplanes, taking internal information into account. In other words, the hyperplanes are controlled by their distance to the critical training data, which lie close to the class boundary. This algorithm works well on sparse training data, achieving high generalization. To evaluate generalization, it is assumed that all data are normally distributed around the training data. Several pattern-classification simulations demonstrate the effectiveness of the proposed algorithm.
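To make the hyperplane-control idea concrete: in an MLP, a hidden unit with weights w and bias b defines the hyperplane w·x + b = 0, and the distance from a sample x to it is |w·x + b| / ||w||. The sketch below (Python/NumPy) shows how critical training data could be selected by that distance and how a hyperplane could be nudged away from them; the function names, the margin threshold, and the update rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hyperplane_distance(w, b, x):
    """Distance from sample x to the hidden unit's hyperplane w.x + b = 0."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w)

def critical_data(w, b, X, margin=0.5):
    """Select the 'critical' training samples: those lying within `margin`
    of the hyperplane, i.e. close to the boundary.
    (The margin value is illustrative; the paper's criterion may differ.)"""
    return np.array([x for x in X if hyperplane_distance(w, b, x) < margin])

def repulsion_update(w, b, x_crit, eta=0.01):
    """One illustrative step that pushes the hyperplane away from a critical
    sample by increasing the unnormalized distance |w.x + b|.
    (A sketch of the idea only; it ignores the ||w|| normalization and is
    not the paper's update rule.)"""
    s = np.sign(np.dot(w, x_crit) + b)
    return w + eta * s * x_crit, b + eta * s
```

In a hybrid scheme of this kind, such a repulsion step would be interleaved with ordinary BP weight updates, so the boundary is shaped both by the error gradient and by its clearance from the nearest training data.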
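The generalization measure can likewise be made concrete: under the stated assumption that true data are normally distributed around each training sample, generalization can be estimated by Monte Carlo sampling of Gaussian perturbations around the training set. A minimal sketch, assuming a `predict` function mapping an input vector to a class label; `sigma` and `n_samples` are illustrative parameters:

```python
import numpy as np

def estimate_generalization(predict, X_train, y_train,
                            sigma=0.1, n_samples=100, seed=None):
    """Monte Carlo estimate of classification accuracy on points drawn
    from isotropic Gaussians centered at each training sample."""
    rng = np.random.default_rng(seed)
    correct, total = 0, 0
    for x, y in zip(X_train, y_train):
        # Draw perturbed points around this training sample.
        pts = x + sigma * rng.standard_normal((n_samples, len(x)))
        correct += sum(predict(p) == y for p in pts)
        total += n_samples
    return correct / total
```

A classifier whose hyperplanes keep their distance from the critical training data should score higher under this measure, since the Gaussian mass around each sample is more likely to fall on the correct side of the boundary.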