{"title":"enzo - ii -一个强大的设计工具,发展多层前馈网络","authors":"H. Braun, Peter Zagorski","doi":"10.1109/ICEC.1994.349939","DOIUrl":null,"url":null,"abstract":"ENZO-II combines two successful search techniques: gradient descent for an efficient local weight optimization and evolution for a global topology optimization. By using these, it takes full advantage of the efficiently computable gradient information without being trapped by local minima. Through knowledge transfer by inheriting parental weights, learning is speeded up by 1-2 orders of magnitude, and the expected fitness of the offspring is far above the average for this network topology. Moreover, ENZO-II impressively thins out the topology by the cooperation between a discrete mutation operator and a continuous weight decay method. Especially, ENZO-II also tries to cut off the connections to possibly redundant input units. Therefore, ENZO-II not only supports the user in the network design but also recognizes redundant input units.<<ETX>>","PeriodicalId":393865,"journal":{"name":"Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"ENZO-II-a powerful design tool to evolve multilayer feed forward networks\",\"authors\":\"H. Braun, Peter Zagorski\",\"doi\":\"10.1109/ICEC.1994.349939\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ENZO-II combines two successful search techniques: gradient descent for an efficient local weight optimization and evolution for a global topology optimization. By using these, it takes full advantage of the efficiently computable gradient information without being trapped by local minima. 
Through knowledge transfer by inheriting parental weights, learning is speeded up by 1-2 orders of magnitude, and the expected fitness of the offspring is far above the average for this network topology. Moreover, ENZO-II impressively thins out the topology by the cooperation between a discrete mutation operator and a continuous weight decay method. Especially, ENZO-II also tries to cut off the connections to possibly redundant input units. Therefore, ENZO-II not only supports the user in the network design but also recognizes redundant input units.<<ETX>>\",\"PeriodicalId\":393865,\"journal\":{\"name\":\"Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICEC.1994.349939\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICEC.1994.349939","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: ENZO-II combines two successful search techniques: gradient descent for efficient local weight optimization and evolution for global topology optimization. By combining these, it takes full advantage of the efficiently computable gradient information without becoming trapped in local minima. Through knowledge transfer by inheriting parental weights, learning is sped up by one to two orders of magnitude, and the expected fitness of the offspring is far above the average for its network topology. Moreover, ENZO-II markedly thins out the topology through the cooperation of a discrete mutation operator and a continuous weight-decay method. In particular, ENZO-II also tries to cut the connections to possibly redundant input units. ENZO-II therefore not only supports the user in network design but also identifies redundant input units.
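The hybrid scheme the abstract describes (an evolutionary search over topologies wrapped around gradient-based weight training, with weight inheritance, weight decay, and a discrete connection-pruning mutation) can be illustrated with a deliberately tiny sketch. This is not ENZO-II itself: the model is a masked linear unit rather than a multilayer network, and all names and parameters (mutation scheme, penalty, learning rate) are hypothetical choices for illustration only.

```python
import random

random.seed(0)

# Toy regression task: y = 2*x0 - x1. The third input x2 is redundant,
# standing in for the redundant input units the abstract mentions.
DATA = [([float(x0), float(x1), float(x2)], 2.0 * x0 - 1.0 * x1)
        for x0 in (0, 1) for x1 in (0, 1) for x2 in (0, 1)]

def predict(w, mask, x):
    return sum(wi * mi * xi for wi, mi, xi in zip(w, mask, x))

def mse(w, mask):
    return sum((predict(w, mask, x) - y) ** 2 for x, y in DATA) / len(DATA)

def train(w, mask, steps, lr=0.1, decay=1e-3):
    """Local search: gradient descent with weight decay on the unmasked weights."""
    w = [wi if mi else 0.0 for wi, mi in zip(w, mask)]
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, y in DATA:
            err = predict(w, mask, x) - y
            for i in range(len(w)):
                grad[i] += 2.0 * err * mask[i] * x[i] / len(DATA)
        w = [(wi - lr * (gi + decay * wi)) if mi else 0.0
             for wi, gi, mi in zip(w, grad, mask)]
    return w

def fitness(w, mask, penalty=0.01):
    # Lower is better: training error plus a cost per surviving connection,
    # so superfluous connections hurt even when they do not raise the error.
    return mse(w, mask) + penalty * sum(mask)

def evolve(generations=10):
    # Parent: fully connected topology, trained from random weights.
    mask = [1, 1, 1]
    w = train([random.uniform(-1.0, 1.0) for _ in mask], mask, steps=200)
    best = (fitness(w, mask), w, mask)
    for _ in range(generations):
        for i in range(len(best[2])):
            # Global search: a discrete mutation toggles one connection.
            child_mask = list(best[2])
            child_mask[i] ^= 1
            # Knowledge transfer: the offspring inherits the parental weights,
            # so a short retraining suffices instead of learning from scratch.
            child_w = train(best[1], child_mask, steps=50)
            cand = (fitness(child_w, child_mask), child_w, child_mask)
            if cand[0] < best[0]:
                best = cand
    return best

fit, w, mask = evolve()
print(mask)  # the connection to the redundant input x2 gets pruned
```

Because the offspring starts from inherited parental weights, 50 retraining steps suffice where the randomly initialized parent needed 200, mirroring the speed-up the abstract attributes to knowledge transfer; the connection penalty plus weight decay drives the pruning of the redundant input.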