S. K. Foo, P. Saratchandran, N. Sundararajan
Proceedings of 1995 IEEE International Conference on Evolutionary Computation
Published: 1995-11-29
DOI: 10.1109/ICEC.1995.487442
Genetic algorithm based pattern allocation schemes for training set parallelism in backpropagation neural networks
Training set parallelization is an efficient method to improve the training performance of the backpropagation neural network algorithm. In training set parallelism, the training patterns are distributed 'optimally' among a heterogeneous array of processors, with the optimality criterion being the minimum training time per epoch. Earlier studies on heterogeneous transputers connected in a pipeline-ring topology have shown that this optimization task leads to a mixed integer programming problem, which requires large computation times to find the optimal pattern allocation. In this paper, a genetic algorithm is used as the optimization tool to find the optimal allocation of patterns. The approach is illustrated using two benchmark problems: the 256-8-256 encoder problem and the NETTALK problem. Results indicate that when 'a priori' information is not used, the computation time needed by the genetic algorithm is comparable to that of mixed integer programming. However, when 'a priori' information is used, the genetic algorithm achieves a significant reduction in the computation time needed to find the optimal solution.
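The core idea can be sketched with a toy genetic algorithm. The sketch below is not the paper's formulation: it assumes a deliberately simplified cost model in which each processor's per-pattern processing cost is a known constant and the epoch time equals the time taken by the slowest processor (ignoring the pipeline-ring communication terms the paper's transputer model would include). The chromosome is a vector of pattern counts, one per processor, summing to the total number of training patterns; function and parameter names (`ga_allocate`, `epoch_time`, `cost`) are illustrative, not from the paper.

```python
import random

def epoch_time(alloc, cost):
    # Simplified cost model: with synchronous epochs, the epoch time is
    # set by the slowest processor (patterns assigned x cost per pattern).
    return max(n * c for n, c in zip(alloc, cost))

def ga_allocate(n_patterns, cost, pop_size=40, generations=200, seed=0):
    """Evolve a pattern allocation (one count per processor) that
    minimizes the per-epoch training time under the toy cost model."""
    rng = random.Random(seed)
    p = len(cost)

    def random_alloc():
        # Random partition of n_patterns into p non-negative counts.
        cuts = sorted(rng.randint(0, n_patterns) for _ in range(p - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [n_patterns])]

    def mutate(alloc):
        # Move one pattern from one processor to another; the total
        # pattern count is preserved, so no repair step is needed.
        child = alloc[:]
        i, j = rng.randrange(p), rng.randrange(p)
        if child[i] > 0:
            child[i] -= 1
            child[j] += 1
        return child

    pop = [random_alloc() for _ in range(pop_size)]
    for _ in range(generations):
        # Elitist selection: keep the better half, refill by mutation.
        pop.sort(key=lambda a: epoch_time(a, cost))
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda a: epoch_time(a, cost))

# Three heterogeneous processors: the fastest should receive the most
# patterns, roughly in inverse proportion to its per-pattern cost.
best = ga_allocate(70, [1.0, 2.0, 4.0])
print(best, epoch_time(best, [1.0, 2.0, 4.0]))
```

In this example the load-balanced allocation is near [40, 20, 10] (epoch time 40), and the GA converges toward it. The paper's point about 'a priori' information corresponds here to seeding the initial population with speed-proportional allocations instead of random partitions, which shrinks the search the GA has to do.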