Knowledge-based hyper-parameter adaptation of multi-stage differential evolution by deep reinforcement learning
Mingzhang Han, Mingjie Fan, Xinchao Zhao, Lingjuan Ye
Neurocomputing, Volume 648, Article 130633, published 2025-06-16. DOI: 10.1016/j.neucom.2025.130633
https://www.sciencedirect.com/science/article/pii/S0925231225013050
Citations: 0
Abstract
Differential evolution (DE) is a prominent algorithm in evolutionary computation, with adaptive control mechanisms for its operators and parameters being a critical research focus due to their impact on performance. Existing studies often rely on trial-and-error methods or deep reinforcement learning (DRL) for per-generation adaptive control, yet they inadequately explore adaptive hyper-parameter tuning across different stages of the evolution process. To address this limitation, this paper presents a knowledge-based framework named DRL-HP-* for multi-stage DE hyper-parameter adaptation using DRL. The framework divides the algorithm's search procedure into multiple equal stages, where a DRL agent determines hyper-parameters in each stage based on five types of states that characterize the evolutionary process. A novel reward function, integrating the performance of the backbone algorithm, is designed to train the agent comprehensively across all training functions. This approach yields three new algorithms (DRL-HP-jSO, DRL-HP-LSHADE-RSP, and DRL-HP-EjSO). Experimental evaluations on the CEC'18 benchmark suite show that the proposed algorithms outperform eight state-of-the-art methods, demonstrating superior optimization performance. Further extensive experiments validate the effectiveness of the designed reward function and the framework's scalability and robustness, highlighting its contribution to enabling stage-wise adaptive hyper-parameter control.
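To make the stage-wise idea concrete, the following minimal Python sketch shows a plain DE/rand/1/bin backbone whose run is split into equal stages, with a trivial stand-in policy choosing the hyper-parameters (F, CR) of each stage from a small state vector. This is an illustrative assumption of the control loop described in the abstract, not the authors' implementation: sphere, extract_state, and policy are hypothetical placeholders, and the real framework uses a trained DRL agent, five specific state types, and jSO/LSHADE-RSP backbones.

import numpy as np

# Hypothetical sketch of stage-wise hyper-parameter control: the DE backbone runs for a
# fixed number of generations per stage, and a placeholder policy (standing in for the
# trained DRL agent) picks the next stage's hyper-parameters from a state vector.

def sphere(x):
    return float(np.sum(x ** 2))

def de_stage(pop, fit, func, hp, gens):
    """Plain DE/rand/1/bin for `gens` generations with fixed F and CR."""
    F, CR = hp["F"], hp["CR"]
    n, d = pop.shape
    for _ in range(gens):
        for i in range(n):
            idxs = [j for j in range(n) if j != i]
            a, b, c = pop[np.random.choice(idxs, 3, replace=False)]
            mutant = a + F * (b - c)
            cross = np.random.rand(d) < CR
            cross[np.random.randint(d)] = True          # ensure at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = func(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop, fit

def extract_state(fit, stage, n_stages):
    """Toy state: search progress plus fitness statistics (not the paper's five state types)."""
    return np.array([stage / n_stages, fit.min(), fit.mean(), fit.std()])

def policy(state):
    """Stand-in for the trained DRL agent: maps a state to per-stage hyper-parameters."""
    return {"F": 0.5 + 0.3 * state[0], "CR": 0.9}

def run(func=sphere, dim=10, pop_size=50, n_stages=5, gens_per_stage=20, seed=0):
    np.random.seed(seed)
    pop = np.random.uniform(-100, 100, size=(pop_size, dim))
    fit = np.array([func(x) for x in pop])
    for stage in range(n_stages):                       # equal stages, as in the framework
        hp = policy(extract_state(fit, stage, n_stages))
        pop, fit = de_stage(pop, fit, func, hp, gens_per_stage)
    return fit.min()

if __name__ == "__main__":
    print("best fitness found:", run())

In the paper's setting, policy would be replaced by the DRL agent trained with the proposed reward function, and de_stage by the chosen backbone (jSO, LSHADE-RSP, or EjSO); the sketch only illustrates the division into equal stages with per-stage hyper-parameter selection.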
Journal introduction:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing, covering neurocomputing theory, practice, and applications.