{"title":"确定性遗传算法:实现细节和一些结果","authors":"R. Salomon","doi":"10.1109/CEC.1999.782001","DOIUrl":null,"url":null,"abstract":"Recent literature on genetic algorithms provides a controversial discussion on the efficiency of this particular class of randomized optimization procedures; despite several encouraging empirical results, recent theoretical analyses have argued that in most cases, the runtime behavior of genetic algorithms is increased by at least a factor of ln(n) with n denoting the number of parameters to be optimized. It has been argued that these inefficiencies are due to intrinsic resampling effects. As a result of these theoretical considerations, a deterministic genetic algorithm has been suggested as a theoretical concept. Since its proposition, informal discussions have been raised concerning some implementation details as well as efficacy issues. Since some implementation details are a bit tricky, this paper discusses some of them in a pseudo programming language similar to Pascal or C. In addition, this paper presents two possible variants in detail and compares their runtime behavior with another fairly established procedure, the breeder genetic algorithm. It turns out that on widely-used test functions, the deterministic variants scale strictly better. Furthermore, this paper discusses some specific fitness functions on which random algorithms yield better worst-ease expectations than deterministic algorithms; but both types require constant time on average, i.e., one function evaluation.","PeriodicalId":292523,"journal":{"name":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"The deterministic genetic algorithm: implementation details and some results\",\"authors\":\"R. Salomon\",\"doi\":\"10.1109/CEC.1999.782001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent literature on genetic algorithms provides a controversial discussion on the efficiency of this particular class of randomized optimization procedures; despite several encouraging empirical results, recent theoretical analyses have argued that in most cases, the runtime behavior of genetic algorithms is increased by at least a factor of ln(n) with n denoting the number of parameters to be optimized. It has been argued that these inefficiencies are due to intrinsic resampling effects. As a result of these theoretical considerations, a deterministic genetic algorithm has been suggested as a theoretical concept. Since its proposition, informal discussions have been raised concerning some implementation details as well as efficacy issues. Since some implementation details are a bit tricky, this paper discusses some of them in a pseudo programming language similar to Pascal or C. In addition, this paper presents two possible variants in detail and compares their runtime behavior with another fairly established procedure, the breeder genetic algorithm. It turns out that on widely-used test functions, the deterministic variants scale strictly better. 
Furthermore, this paper discusses some specific fitness functions on which random algorithms yield better worst-ease expectations than deterministic algorithms; but both types require constant time on average, i.e., one function evaluation.\",\"PeriodicalId\":292523,\"journal\":{\"name\":\"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-07-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CEC.1999.782001\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.1999.782001","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The deterministic genetic algorithm: implementation details and some results
Recent literature on genetic algorithms offers a controversial discussion of the efficiency of this particular class of randomized optimization procedures. Despite several encouraging empirical results, recent theoretical analyses have argued that in most cases the runtime of genetic algorithms increases by at least a factor of ln(n), with n denoting the number of parameters to be optimized, and that these inefficiencies are due to intrinsic resampling effects. As a result of these theoretical considerations, a deterministic genetic algorithm has been suggested as a theoretical concept. Since it was proposed, informal discussions have been raised concerning some implementation details as well as efficacy issues. Because some of these implementation details are a bit tricky, this paper discusses them in a pseudo programming language similar to Pascal or C. In addition, the paper presents two possible variants in detail and compares their runtime behavior with another fairly established procedure, the breeder genetic algorithm. It turns out that on widely used test functions, the deterministic variants scale strictly better. Furthermore, the paper discusses some specific fitness functions on which random algorithms yield better worst-case expectations than deterministic algorithms; both types, however, require constant time on average, i.e., one function evaluation.
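The abstract notes that the tricky implementation details are presented in a Pascal- or C-like pseudo programming language, but the algorithm itself is not reproduced in this record. The C program below is therefore only a minimal illustrative sketch of the general idea of replacing random mutation with a deterministic, exhaustive per-parameter search step; the sphere fitness function, the step-size halving schedule, and all constants are assumptions made for this example, not details taken from the paper.

```c
/* Illustrative sketch only -- NOT the algorithm from Salomon's paper.
 * It shows one way a "deterministic" search step can replace random
 * mutation: each of the n parameters is perturbed by a fixed step in
 * both directions, and improving moves are kept. The fitness function,
 * step-size schedule, and termination rule are hypothetical choices. */
#include <stdio.h>

#define N 4            /* number of parameters to optimize */
#define MAX_ITER 1000  /* iteration budget for the sketch  */

/* Example fitness: the sphere function f(x) = sum x_i^2 (minimized). */
static double fitness(const double x[N])
{
    double f = 0.0;
    for (int i = 0; i < N; i++)
        f += x[i] * x[i];
    return f;
}

int main(void)
{
    double x[N] = { 3.0, -2.0, 1.5, 4.0 };  /* arbitrary start point */
    double step = 1.0;                       /* initial step size     */
    double best = fitness(x);

    for (int iter = 0; iter < MAX_ITER && step > 1e-9; iter++) {
        int improved = 0;

        /* Deterministic sweep: try +step and -step on every parameter. */
        for (int i = 0; i < N; i++) {
            for (int s = -1; s <= 1; s += 2) {
                double old = x[i];
                x[i] = old + s * step;
                double f = fitness(x);
                if (f < best) {    /* keep strictly improving moves */
                    best = f;
                    improved = 1;
                } else {
                    x[i] = old;    /* undo non-improving move       */
                }
            }
        }
        if (!improved)
            step *= 0.5;  /* no improvement at this resolution: refine */
    }

    printf("best fitness %.6g at x = (%.4f, %.4f, %.4f, %.4f)\n",
           best, x[0], x[1], x[2], x[3]);
    return 0;
}
```

In contrast to a randomized mutation operator, the sweep above never re-evaluates the same candidate twice within one pass, which is the intuition behind the resampling argument summarized in the abstract; how the paper's two deterministic variants actually organize this search is described in the paper itself.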