Three stage based reinforcement learning for combining multiple metaheuristic algorithms

Xiaotong Liu, Tianlei Wang, Zhiqiang Zeng, Ye Tian, Jun Tong

Swarm and Evolutionary Computation, Vol. 95, Article 101935. DOI: 10.1016/j.swevo.2025.101935. Published 2025-04-14. Available at https://www.sciencedirect.com/science/article/pii/S2210650225000938
Citations: 0
Abstract
Combining multiple metaheuristic algorithms can effectively improve performance by exploiting the complementary characteristics of different algorithms; the key question is how to combine them. Reinforcement learning is one effective method for combining multiple metaheuristic algorithms. However, designing a competitive reinforcement learning approach that achieves efficient collaboration among metaheuristic algorithms is highly challenging. Therefore, this study proposes a three-stage reinforcement learning method for combining multiple metaheuristic algorithms (TSRL-CMM). TSRL-CMM is divided into three stages: an exploration stage, a stage with both exploration and exploitation, and an exploitation stage. On this basis, an adaptive action selection strategy and a reward function are designed. The action selection strategy adaptively selects appropriate metaheuristic algorithms based on the state of the population, achieving a balance between exploration and exploitation. The reward function effectively guides the population toward the expected state based on the iteration stage and state transitions. To verify the effectiveness of TSRL-CMM, we evaluated it on the CEC2017 test suite, nine real-world engineering design problems, and six power system optimization problems. TSRL-CMM was compared with 10 state-of-the-art metaheuristic algorithms, and experimental results showed that it outperformed the compared algorithms on both artificial and real-world problems. Furthermore, TSRL-CMM was compared with three CEC winner algorithms on the CEC2017 benchmark suite, and the results show that the proposed algorithm is highly competitive. The source code is available at https://github.com/xtongliu/TSRL-CMM-code.
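The general idea of using reinforcement learning to choose among metaheuristic algorithms can be sketched with a tabular Q-learning selector. This is an illustrative sketch only, not the authors' TSRL-CMM implementation: the state encoding, stage thresholds, epsilon schedule, and reward values below are all placeholder assumptions, and the actual paper designs a more elaborate adaptive action selection strategy and reward function.

```python
import random


class OperatorSelector:
    """Illustrative Q-learning agent that picks which metaheuristic
    operator to apply to the population at each iteration.

    Hypothetical sketch: state indices, the three-stage split, and the
    stage-dependent exploration rate are assumptions for illustration.
    """

    def __init__(self, n_operators, n_states, epsilon=0.1, alpha=0.5, gamma=0.9):
        # Q-table: one row per population state, one column per operator.
        self.q = [[0.0] * n_operators for _ in range(n_states)]
        self.epsilon = epsilon  # base exploration rate
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor

    def stage(self, progress):
        # Three iteration stages, mirroring the paper's high-level design:
        # exploration, mixed exploration/exploitation, exploitation.
        if progress < 1 / 3:
            return "exploration"
        if progress < 2 / 3:
            return "mixed"
        return "exploitation"

    def select(self, state, progress):
        # Epsilon-greedy action selection; explore more aggressively in
        # the early (exploration) stage of the run.
        eps = self.epsilon * (2.0 if self.stage(progress) == "exploration" else 1.0)
        if random.random() < eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(self.q[next_state])
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action]
        )
```

In use, each iteration would map the population to a state index, call `select` to pick an operator, run that metaheuristic for one step, compute a reward from the resulting state transition, and call `update`. Over time the Q-table learns which operator tends to help in each state and stage.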
Journal description:
Swarm and Evolutionary Computation is a pioneering peer-reviewed journal focused on the latest research and advancements in nature-inspired intelligent computation using swarm and evolutionary algorithms. It covers theoretical, experimental, and practical aspects of these paradigms and their hybrids, promoting interdisciplinary research. The journal prioritizes the publication of high-quality, original articles that push the boundaries of evolutionary computation and swarm intelligence. Additionally, it welcomes survey papers on current topics and novel applications. Topics of interest include but are not limited to: Genetic Algorithms, and Genetic Programming, Evolution Strategies, and Evolutionary Programming, Differential Evolution, Artificial Immune Systems, Particle Swarms, Ant Colony, Bacterial Foraging, Artificial Bees, Fireflies Algorithm, Harmony Search, Artificial Life, Digital Organisms, Estimation of Distribution Algorithms, Stochastic Diffusion Search, Quantum Computing, Nano Computing, Membrane Computing, Human-centric Computing, Hybridization of Algorithms, Memetic Computing, Autonomic Computing, Self-organizing systems, Combinatorial, Discrete, Binary, Constrained, Multi-objective, Multi-modal, Dynamic, and Large-scale Optimization.