Theoretical and Empirical Analysis of Parameter Control Mechanisms in the (1 + (λ, λ)) Genetic Algorithm

Mario Alejandro Hevia Fajardo, Dirk Sudholt
{"title":"Theoretical and Empirical Analysis of Parameter Control Mechanisms in the (1 + (λ, λ)) Genetic Algorithm","authors":"Mario Alejandro Hevia Fajardo, Dirk Sudholt","doi":"10.1145/3564755","DOIUrl":null,"url":null,"abstract":"The self-adjusting (1 + (λ, λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation as in OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably. We study this problem for the standard Jumpk benchmark problem class using runtime analysis. The self-adjusting (1 + (λ, λ)) GA behaves like a (1 + n) EA whenever the maximum value for λ is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that resets are effective for all Jumpk problems: the self-adjusting (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and evolutionary algorithms with heavy-tailed mutation, apart from a small polynomial overhead. Along the way, we present new general methods for translating existing runtime bounds from the (1 + 1) EA to the self-adjusting (1 + (λ, λ)) GA. We also show that the algorithm presents a bimodal parameter landscape with respect to λ on Jumpk. For appropriate n and k, the landscape features a local optimum in a wide basin of attraction and a global optimum in a narrow basin of attraction. To our knowledge this is the first proof of a bimodal parameter landscape for the runtime of an evolutionary algorithm on a multimodal problem.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"86 8","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Evolutionary Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3564755","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

The self-adjusting (1 + (λ, λ)) GA is the best-known genetic algorithm for problems with a good fitness-distance correlation, such as OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. On multimodal problems, however, the parameter control mechanism tends to increase λ uncontrollably. We study this problem for the standard Jump_k benchmark problem class using runtime analysis. The self-adjusting (1 + (λ, λ)) GA behaves like a (1 + n) EA whenever the maximum value for λ is reached, which is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that resets are effective for all Jump_k problems: the self-adjusting (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and as well as evolutionary algorithms with heavy-tailed mutation, apart from a small polynomial overhead. Along the way, we present new general methods for translating existing runtime bounds from the (1 + 1) EA to the self-adjusting (1 + (λ, λ)) GA. We also show that the algorithm exhibits a bimodal parameter landscape with respect to λ on Jump_k: for appropriate n and k, the landscape features a local optimum in a wide basin of attraction and a global optimum in a narrow basin of attraction. To our knowledge, this is the first proof of a bimodal parameter landscape for the runtime of an evolutionary algorithm on a multimodal problem.
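The mechanism described in the abstract can be made concrete with a short illustration. The Python sketch below is written for this summary and is not taken from the paper; it follows the usual presentation of the self-adjusting (1 + (λ, λ)) GA (mutation rate p = λ/n, crossover bias c = 1/λ, one-fifth-success-rule update of λ with factor F) and adds the cap lam_max and the optional reset of λ to 1 discussed above. All names (self_adjusting_ollga, lam_max, reset, F, budget) and the concrete parameter values are placeholders chosen for this sketch.

```python
import random


def onemax(x):
    """OneMax: number of one-bits."""
    return sum(x)


def jump(x, k):
    """Jump_k: OneMax with a fitness valley of width k just before the optimum."""
    n, s = len(x), sum(x)
    if s == n or s <= n - k:
        return k + s
    return n - s


def self_adjusting_ollga(f, n, lam_max, reset=True, F=1.5, budget=200_000):
    """Sketch of the self-adjusting (1 + (lambda, lambda)) GA with a capped
    and optionally reset parameter lambda (illustrative, not the paper's code)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    f_opt = f([1] * n)            # both benchmarks are maximised by the all-ones string
    lam = 1.0
    evals = 1

    while fx < f_opt and evals < budget:
        lam_int = max(1, round(lam))
        p = lam_int / n           # mutation probability
        c = 1.0 / lam_int         # crossover bias

        # Mutation phase: flip the same number ell ~ Bin(n, p) of bits in each offspring.
        ell = sum(random.random() < p for _ in range(n))
        best_mut, best_mut_f = x, float("-inf")
        for _ in range(lam_int):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] ^= 1
            fy = f(y)
            evals += 1
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy

        # Crossover phase: biased uniform crossover of x with the best mutant.
        best_y, best_y_f = best_mut, best_mut_f
        for _ in range(lam_int):
            y = [best_mut[i] if random.random() < c else x[i] for i in range(n)]
            fy = f(y)
            evals += 1
            if fy > best_y_f:
                best_y, best_y_f = y, fy

        # Elitist selection and one-fifth-success-rule update of lambda.
        if best_y_f > fx:
            x, fx = best_y, best_y_f
            lam = max(lam / F, 1.0)
        else:
            if best_y_f == fx:
                x = best_y
            lam *= F ** 0.25
            if lam > lam_max:
                # Resetting lambda to 1 lets it cycle through the parameter space;
                # without the reset, lambda stays capped at lam_max.
                lam = 1.0 if reset else float(lam_max)

    return x, fx, evals
```

For example, self_adjusting_ollga(lambda x: jump(x, 3), n=40, lam_max=40) runs the reset variant on Jump_3, while reset=False keeps λ capped at lam_max once it is reached, which corresponds to the capped behaviour the abstract compares against.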