{"title":"对Alopex训练算法可能改进的研究","authors":"A. Bia","doi":"10.1109/SBRN.2000.889726","DOIUrl":null,"url":null,"abstract":"We studied the performance of the Alopex algorithm, and proposed modifications that improve the training time, and simplified the algorithm. We tested different variations of the algorithm. We describe the best cases and summarize the conclusions we arrived at. One of the proposed variations (99/B) performs slightly faster than the Alopex algorithm described by Unnikrishnan et al. (1994), showing less unsuccessful training attempts, while being simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in the fact that no annealing scheme is applied during the training process and hence it uses less parameters.","PeriodicalId":448461,"journal":{"name":"Proceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks","volume":"217 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2000-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"A study of possible improvements to the Alopex training algorithm\",\"authors\":\"A. Bia\",\"doi\":\"10.1109/SBRN.2000.889726\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We studied the performance of the Alopex algorithm, and proposed modifications that improve the training time, and simplified the algorithm. We tested different variations of the algorithm. We describe the best cases and summarize the conclusions we arrived at. One of the proposed variations (99/B) performs slightly faster than the Alopex algorithm described by Unnikrishnan et al. (1994), showing less unsuccessful training attempts, while being simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in the fact that no annealing scheme is applied during the training process and hence it uses less parameters.\",\"PeriodicalId\":448461,\"journal\":{\"name\":\"Proceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks\",\"volume\":\"217 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2000-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SBRN.2000.889726\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SBRN.2000.889726","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A study of possible improvements to the Alopex training algorithm
We studied the performance of the Alopex algorithm and proposed modifications that reduce training time and simplify the algorithm. We tested several variations of the algorithm, describe the best-performing ones, and summarize the conclusions we reached. One of the proposed variations (99/B) trains slightly faster than the Alopex algorithm described by Unnikrishnan et al. (1994), produces fewer unsuccessful training attempts, and is simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in that no annealing scheme is applied during training, and hence it uses fewer parameters.
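For context, the correlation-based rule the abstract refers to perturbs each weight by a fixed step +/-delta, with the sign biased by the correlation between that weight's previous change and the previous change in the global error. Below is a minimal sketch in Python of such an Alopex-style update with the temperature T held fixed (mirroring the abstract's "no annealing" simplification); the function name, the step size delta, the value of T, and the toy quadratic error are illustrative assumptions, not the paper's 99/B variant.

    import numpy as np

    def alopex_step(w, w_prev, E, E_prev, delta=0.01, T=0.1, rng=None):
        """One Alopex-style update (sketch, not the paper's exact variant).

        Each weight takes a step of +/-delta; the probability of the
        negative step grows with the correlation between the weight's
        last change and the last change in the global error E.
        """
        rng = rng or np.random.default_rng()
        C = (w - w_prev) * (E - E_prev)        # local correlation per weight
        p = 1.0 / (1.0 + np.exp(-C / T))       # probability of a -delta step
        step = np.where(rng.random(w.shape) < p, -delta, delta)
        return w + step

    # Toy usage: minimize E(w) = ||w||^2 with a fixed temperature.
    rng = np.random.default_rng(0)
    w_prev = rng.normal(size=5)
    w = w_prev + 0.01
    E_prev, E = float(np.sum(w_prev ** 2)), float(np.sum(w ** 2))
    for _ in range(2000):
        w, w_prev = alopex_step(w, w_prev, E, E_prev, rng=rng), w
        E_prev, E = E, float(np.sum(w ** 2))
    print(f"final error: {E:.4f}")

The descent bias comes from the sign logic alone: if the previous step in a weight increased the error, C is positive for that weight and the update is likely to reverse direction, so no gradient information is ever needed. In the original Alopex of Unnikrishnan et al. (1994), T is annealed over training (typically from a running average of |C|); holding it fixed, as above, is what removes those extra parameters.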