Shiting Wang, Jinhua Zheng, Yingjie Zou, Yuan Liu, Juan Zou, Shengxiang Yang
A population hierarchical-based evolutionary algorithm for large-scale many-objective optimization

DOI: 10.1016/j.swevo.2024.101752
Swarm and Evolutionary Computation, Volume 91, Article 101752
Published: 2024-10-19 (Journal Article)
JCR: Q1, Computer Science, Artificial Intelligence; Impact Factor 8.2
Citations: 0
Abstract
In large-scale many-objective optimization problems (LMaOPs), algorithm performance faces significant challenges as the number of objective functions and decision variables increases. Two main challenges arise in addressing this type of problem: the large number of decision variables creates an enormous decision space that must be explored, leading to slow convergence; and the high-dimensional objective space makes it difficult to select dominant individuals within the population. To address these challenges, this paper introduces a population hierarchy-based evolutionary algorithm for LMaOPs that employs different offspring-generation strategies at different population levels. Initially, the population is categorized into three levels by fitness value: poorly performing solutions with higher fitness (P_h), better solutions with lower fitness (P_l), and excellent individuals stored in the archive set (P_a). Subsequently, a hierarchical knowledge integration (HKI) strategy guides the evolution of individuals at each level: individuals in P_l generate offspring by integrating differential knowledge from P_a and P_h, while individuals in P_h generate offspring by learning prior knowledge from P_a. Finally, a cluster-based environmental selection strategy balances population diversity and convergence. Extensive experiments on LMaOPs with up to 10 objectives and 5000 decision variables validate the algorithm’s effectiveness, demonstrating superior performance.
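The three-level partition and the HKI offspring step described in the abstract can be sketched in outline. The partition sizes, the scaling factor `F`, and the specific difference-based update rules below are illustrative assumptions — the abstract does not specify the paper's actual operators — and the cluster-based environmental selection step is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def partition_population(pop, fitness, archive_size):
    """Split a population into three levels by fitness (lower = better):
    P_a (archive of the best individuals), P_l (better remainder),
    and P_h (worst individuals)."""
    order = np.argsort(fitness)              # ascending: best first
    a_idx = order[:archive_size]             # archive P_a
    rest = order[archive_size:]
    half = len(rest) // 2
    l_idx, h_idx = rest[:half], rest[half:]  # P_l better than P_h
    return pop[a_idx], pop[l_idx], pop[h_idx]

def hki_offspring(P_a, P_l, P_h, F=0.5):
    """Hypothetical sketch of hierarchical knowledge integration:
    P_l individuals move along the difference between a random archive
    member and a random poor solution ('differential knowledge');
    P_h individuals move toward a random archive member
    ('prior knowledge' from P_a)."""
    a = P_a[rng.integers(len(P_a), size=len(P_l))]
    h = P_h[rng.integers(len(P_h), size=len(P_l))]
    off_l = P_l + F * (a - h)                # differential move for P_l
    a2 = P_a[rng.integers(len(P_a), size=len(P_h))]
    off_h = P_h + F * (a2 - P_h)             # pull P_h toward the archive
    return np.vstack([off_l, off_h])
```

Sorting once by fitness makes the three levels ordered by quality, so the archive always dominates the other two sets under the (assumed) lower-is-better fitness convention.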
Journal introduction:
Swarm and Evolutionary Computation is a pioneering peer-reviewed journal focused on the latest research and advancements in nature-inspired intelligent computation using swarm and evolutionary algorithms. It covers theoretical, experimental, and practical aspects of these paradigms and their hybrids, promoting interdisciplinary research. The journal prioritizes the publication of high-quality, original articles that push the boundaries of evolutionary computation and swarm intelligence. Additionally, it welcomes survey papers on current topics and novel applications. Topics of interest include, but are not limited to: Genetic Algorithms and Genetic Programming, Evolution Strategies and Evolutionary Programming, Differential Evolution, Artificial Immune Systems, Particle Swarms, Ant Colony, Bacterial Foraging, Artificial Bees, Firefly Algorithm, Harmony Search, Artificial Life, Digital Organisms, Estimation of Distribution Algorithms, Stochastic Diffusion Search, Quantum Computing, Nano Computing, Membrane Computing, Human-centric Computing, Hybridization of Algorithms, Memetic Computing, Autonomic Computing, Self-organizing Systems, and Combinatorial, Discrete, Binary, Constrained, Multi-objective, Multi-modal, Dynamic, and Large-scale Optimization.