A Parallel Differential Evolution Algorithm

W. Kwedlo, K. Bandurski
{"title":"一种并行差分进化算法","authors":"W. Kwedlo, K. Bandurski","doi":"10.1109/PARELEC.2006.6","DOIUrl":null,"url":null,"abstract":"In the paper the problem of using a differential evolution algorithm for feed-forward neural network training is considered. A new parallelization scheme for the computation of the fitness function is proposed. This scheme is based on data decomposition. Both the learning set and the population of the evolutionary algorithm are distributed among processors. The processors form a pipeline using the ring topology. In a single step each processor computes the local fitness of its current subpopulation while sending the previous subpopulation to the successor and receiving next sub-population from the predecessor. Thus it is possible to overlap communication and computation using non-blocking MPI routines. Our approach was applied to several classification and regression learning problems. The scalability of the algorithm was measured on a compute cluster consisting of sixteen two-processor servers connected by a fast infiniband interconnect. The results of initial experiments show that for large datasets the algorithm is capable of obtaining very good, near linear speedup","PeriodicalId":186915,"journal":{"name":"International Conference on Parallel Computing in Electrical Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"30","resultStr":"{\"title\":\"A Parallel Differential Evolution Algorithm A Parallel Differential Evolution Algorithm\",\"authors\":\"W. Kwedlo, K. Bandurski\",\"doi\":\"10.1109/PARELEC.2006.6\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the paper the problem of using a differential evolution algorithm for feed-forward neural network training is considered. A new parallelization scheme for the computation of the fitness function is proposed. This scheme is based on data decomposition. Both the learning set and the population of the evolutionary algorithm are distributed among processors. The processors form a pipeline using the ring topology. In a single step each processor computes the local fitness of its current subpopulation while sending the previous subpopulation to the successor and receiving next sub-population from the predecessor. Thus it is possible to overlap communication and computation using non-blocking MPI routines. Our approach was applied to several classification and regression learning problems. The scalability of the algorithm was measured on a compute cluster consisting of sixteen two-processor servers connected by a fast infiniband interconnect. 
The results of initial experiments show that for large datasets the algorithm is capable of obtaining very good, near linear speedup\",\"PeriodicalId\":186915,\"journal\":{\"name\":\"International Conference on Parallel Computing in Electrical Engineering\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"30\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Conference on Parallel Computing in Electrical Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PARELEC.2006.6\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Parallel Computing in Electrical Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PARELEC.2006.6","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 30

Abstract

This paper considers the problem of using a differential evolution algorithm for feed-forward neural network training. A new parallelization scheme for the computation of the fitness function is proposed. The scheme is based on data decomposition: both the learning set and the population of the evolutionary algorithm are distributed among the processors, which form a pipeline using a ring topology. In a single step, each processor computes the local fitness of its current subpopulation while sending the previous subpopulation to its successor and receiving the next subpopulation from its predecessor, so that communication and computation can be overlapped using non-blocking MPI routines. The approach was applied to several classification and regression learning problems. The scalability of the algorithm was measured on a compute cluster of sixteen two-processor servers connected by a fast InfiniBand interconnect. Initial experiments show that, for large datasets, the algorithm achieves very good, near-linear speedup.
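
The core of the scheme is overlapping fitness computation with the ring transfer of subpopulations. The sketch below (C with MPI) illustrates one way such an overlap can be coded; it is not the authors' implementation. The subpopulation size, the weight-vector dimension, the helper eval_on_local_shard(), and the final MPI_Allreduce that combines per-shard fitness contributions are all assumptions made for illustration. For simplicity the sketch forwards the subpopulation it is currently evaluating (the send buffer is only read), rather than the previously evaluated one as in the paper.

```c
/*
 * Minimal sketch (not the authors' code) of a ring-pipeline fitness
 * evaluation.  Assumptions: one subpopulation per process, one shard of
 * the learning set per process, and a hypothetical helper
 * eval_on_local_shard() that scores SUBPOP individuals on the local shard.
 */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define SUBPOP 16   /* individuals per subpopulation (assumed)  */
#define DIM    128  /* network weights per individual (assumed) */

/* Hypothetical helper: fitness of each individual on the local data shard. */
extern void eval_on_local_shard(double w[SUBPOP][DIM], double fit[SUBPOP]);

/* 'my_subpop' holds this process's subpopulation; on return 'fitness'
 * (length nprocs*SUBPOP, caller-allocated) holds the fitness of every
 * individual in the whole population.                                      */
void ring_fitness(double my_subpop[SUBPOP][DIM], double *fitness,
                  MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);
    const int succ = (rank + 1) % nprocs;            /* next in the ring   */
    const int pred = (rank + nprocs - 1) % nprocs;   /* previous in ring   */

    double cur[SUBPOP][DIM], nxt[SUBPOP][DIM];
    double *partial = calloc((size_t)nprocs * SUBPOP, sizeof *partial);
    memcpy(cur, my_subpop, sizeof cur);

    for (int step = 0; step < nprocs; ++step) {
        MPI_Request req[2];
        /* Start forwarding the current subpopulation to the successor and
         * prefetching the next one from the predecessor ...               */
        MPI_Isend(cur, SUBPOP * DIM, MPI_DOUBLE, succ, 0, comm, &req[0]);
        MPI_Irecv(nxt, SUBPOP * DIM, MPI_DOUBLE, pred, 0, comm, &req[1]);

        /* ... while scoring it on the local shard of the learning set, so
         * communication overlaps with computation.  'owner' is the rank
         * that originally owns the subpopulation currently in 'cur'.      */
        int owner = (rank - step + nprocs) % nprocs;
        eval_on_local_shard(cur, &partial[owner * SUBPOP]);

        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
        memcpy(cur, nxt, sizeof cur);    /* rotate: evaluate it next step  */
    }

    /* Combine the per-shard contributions of every individual.            */
    MPI_Allreduce(partial, fitness, nprocs * SUBPOP, MPI_DOUBLE, MPI_SUM,
                  comm);
    free(partial);
}
```

Because MPI_Isend and MPI_Irecv return immediately, the evaluation in the loop body runs while the messages are in flight, and MPI_Waitall synchronizes only before the buffers are rotated. Note that reading the send buffer of a pending non-blocking send is explicitly permitted from MPI-3.0 onward; an implementation closer to the paper would instead forward the subpopulation evaluated in the previous step.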