Knowledge-based hyper-parameter adaptation of multi-stage differential evolution by deep reinforcement learning

Impact Factor 5.5 · CAS Region 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Mingzhang Han, Mingjie Fan, Xinchao Zhao, Lingjuan Ye
{"title":"Knowledge-based hyper-parameter adaptation of multi-stage differential evolution by deep reinforcement learning","authors":"Mingzhang Han,&nbsp;Mingjie Fan,&nbsp;Xinchao Zhao,&nbsp;Lingjuan Ye","doi":"10.1016/j.neucom.2025.130633","DOIUrl":null,"url":null,"abstract":"<div><div>Differential evolution (DE) is a prominent algorithm in evolutionary computation, with adaptive control mechanisms for its operators and parameters being a critical research focus due to their impact on performance. Existing studies often rely on trial-and-error methods or deep reinforcement learning (DRL) for per-generation adaptive control, yet they inadequately explore adaptive hyper-parameter tuning across different stages of the evolution process. To address this limitation, this paper presents a knowledge-based framework named DRL-HP-* for multi-stage DE hyper-parameter adaptation using DRL. The framework divides the algorithm’s search procedure into multiple equal stages, where a DRL agent determines hyper-parameters in each stage based on five types of states that characterize the evolutionary process. A novel reward function is designed to comprehensively train the agent across all training functions, integrating the performance of the backbone algorithm. This approach results in the development of three new algorithms (DRL-HP-jSO, DRL-HP-LSHADE-RSP, and DRL-HP-EjSO). Experimental evaluations on the CEC’18 benchmark suite demonstrate that the proposed algorithms outperform eight state-of-the-art methods, demonstrating superior optimization performance. Further extensive experiments validate the effectiveness of the designed reward function and the framework’s scalability and robustness, highlighting its contribution to enabling stage-wise adaptive hyper-parameter control.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"648 ","pages":"Article 130633"},"PeriodicalIF":5.5000,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neurocomputing","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0925231225013050","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Differential evolution (DE) is a prominent algorithm in evolutionary computation, and adaptive control of its operators and parameters is a critical research focus because of its impact on performance. Existing studies often rely on trial-and-error methods or deep reinforcement learning (DRL) for per-generation adaptive control, yet they inadequately explore adaptive hyper-parameter tuning across different stages of the evolution process. To address this limitation, this paper presents a knowledge-based framework named DRL-HP-* for multi-stage DE hyper-parameter adaptation using DRL. The framework divides the algorithm’s search procedure into multiple equal stages, and a DRL agent determines the hyper-parameters for each stage based on five types of states that characterize the evolutionary process. A novel reward function, which integrates the performance of the backbone algorithm, is designed to train the agent comprehensively across all training functions. Applying the framework to three backbone algorithms yields three new algorithms (DRL-HP-jSO, DRL-HP-LSHADE-RSP, and DRL-HP-EjSO). Experimental evaluations on the CEC’18 benchmark suite show that the proposed algorithms outperform eight state-of-the-art methods in optimization performance. Further extensive experiments validate the effectiveness of the designed reward function and the framework’s scalability and robustness, highlighting its contribution to enabling stage-wise adaptive hyper-parameter control.
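
The abstract describes the mechanism only at a high level: the search run is split into equal stages, and a DRL agent fixes the hyper-parameters for each stage from state features of the evolutionary process. As a rough illustration of that stage-wise control loop, the following is a minimal Python sketch. The state features, the placeholder policy, and the choice of F and CR as the controlled hyper-parameters are assumptions made for illustration, not the paper's actual design; the backbone here is plain DE/rand/1 rather than jSO, LSHADE-RSP, or EjSO, and in the paper's framework the placeholder policy would be a network trained with the proposed reward function across all training functions.

import numpy as np

def sphere(x):
    # Toy objective standing in for a CEC'18 benchmark function.
    return float(np.sum(x ** 2))

def placeholder_policy(state):
    # Stand-in for the trained DRL agent: maps a stage-level state vector to the
    # hyper-parameters used throughout the next stage. Here it is just a schedule
    # that shrinks F with progress (illustrative assumption, not the learned policy).
    progress = state[0]
    return {"F": 0.9 - 0.4 * progress, "CR": 0.9}

def stagewise_de(func, dim=10, pop_size=50, n_stages=5, gens_per_stage=40, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-100.0, 100.0, size=(pop_size, dim))
    fit = np.array([func(x) for x in pop])

    for stage in range(n_stages):
        # Stage-level state: normalized progress plus simple fitness statistics
        # (the paper uses five types of states; these particular features are assumptions).
        state = np.array([stage / n_stages, fit.mean(), fit.std(), fit.min()])
        hp = placeholder_policy(state)  # hyper-parameters fixed for the whole stage

        for _ in range(gens_per_stage):
            for i in range(pop_size):
                candidates = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(candidates, size=3, replace=False)]
                mutant = np.clip(a + hp["F"] * (b - c), -100.0, 100.0)  # DE/rand/1 mutation
                cross = rng.random(dim) < hp["CR"]
                cross[rng.integers(dim)] = True  # guarantee at least one mutated gene
                trial = np.where(cross, mutant, pop[i])
                f_trial = func(trial)
                if f_trial <= fit[i]:  # greedy one-to-one selection
                    pop[i], fit[i] = trial, f_trial
    return fit.min()

print(stagewise_de(sphere))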
Source journal: Neurocomputing
Field: Engineering/Technology — Computer Science: Artificial Intelligence
CiteScore: 13.10
Self-citation rate: 10.00%
Articles published per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.