Hybrid genetic algorithm and simulated annealing (HGASA) in global function optimization
Authors: Dingjun Chen, Chung-Yeol Lee, C. Park
Published in: 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'05), 2005-11-14
DOI: 10.1109/ICTAI.2005.72
We have implemented the sequential HGASA on a Sun workstation; compared with several sequential optimization algorithms that offer low efficiency and limited reliability, it performs very well in finding the global optimum of a sample function optimization problem. However, the sequential HGASA generally incurs a long run time. We therefore implemented a parallel HGASA using the Message Passing Interface (MPI) on a high-performance computer and ran extensive tests on a set of commonly used function optimization problems. The performance of this parallel approach was analyzed on an IBM Beowulf PC cluster in terms of program execution time, relative speedup, and efficiency.
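The abstract does not include pseudocode, so the following is a minimal, sequential sketch of one way a GA/SA hybrid of this kind can be structured; the function names, parameter values, and the Rastrigin benchmark are illustrative choices, not details taken from the paper. The idea it shows: offspring produced by genetic crossover and mutation are accepted or rejected with a simulated-annealing Metropolis test at a temperature that cools each generation, which lets the search escape local optima early and behave greedily late.

```python
import math
import random

def rastrigin(x):
    # Classic multimodal benchmark; the global minimum is 0 at the origin.
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def hgasa(fitness, dim=2, pop_size=30, generations=200,
          t0=1.0, cooling=0.95, bound=5.12, seed=0):
    """GA whose offspring replace their parents only if they pass a
    simulated-annealing (Metropolis) acceptance test. Illustrative sketch,
    not the paper's exact algorithm."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-bound, bound) for _ in range(dim)] for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        nxt = []
        for parent in pop:
            # Tournament-select a mate, then uniform crossover.
            mate = min(rng.sample(pop, 3), key=fitness)
            child = [p if rng.random() < 0.5 else m for p, m in zip(parent, mate)]
            # Gaussian mutation, clipped to the search bounds.
            child = [min(bound, max(-bound, c + rng.gauss(0, 0.1))) for c in child]
            # SA-style acceptance: always keep improvements; accept worse
            # offspring with probability exp(-delta / temp).
            delta = fitness(child) - fitness(parent)
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                nxt.append(child)
            else:
                nxt.append(parent)
        pop = nxt
        temp *= cooling  # geometric cooling schedule
    return min(pop, key=fitness)

best = hgasa(rastrigin)
print(rastrigin(best))  # a value near the global optimum of 0
```

For the metrics named in the abstract, the standard definitions are relative speedup S_p = T_1 / T_p (sequential time over parallel time on p processors) and efficiency E_p = S_p / p; in the MPI version described in the paper, each process would evaluate a slice of the population in parallel.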