Shiting Wang , Jinhua Zheng , Yingjie Zou , Yuan Liu , Juan Zou , Shengxiang Yang
{"title":"基于群体分层的大规模多目标优化进化算法","authors":"Shiting Wang , Jinhua Zheng , Yingjie Zou , Yuan Liu , Juan Zou , Shengxiang Yang","doi":"10.1016/j.swevo.2024.101752","DOIUrl":null,"url":null,"abstract":"<div><div>In large-scale many-objective optimization problems (LMaOPs), the performance of algorithms faces significant challenges as the number of objective functions and decision variables increases. The main challenges in addressing this type of problem are as follows: the large number of decision variables creates an enormous decision space that needs to be explored, leading to slow convergence; and the high-dimensional objective space presents difficulties in selecting dominant individuals within the population. To address this issue, this paper introduces an evolutionary algorithm based on population hierarchy to address LMaOPs. The algorithm employs different strategies for offspring generation at various population levels. Initially, the population is categorized into three levels by fitness value: poorly performing solutions with higher fitness (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span>), better solutions with lower fitness (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>l</mi></mrow></msub></math></span>), and excellent individuals stored in the archive set (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span>). Subsequently, a hierarchical knowledge integration strategy (HKI) guides the evolution of individuals at different levels. 
Individuals in <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>l</mi></mrow></msub></math></span> generate offspring by integrating differential knowledge from <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span>, while individuals in <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span> generate offspring by learning prior knowledge from <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span>. Finally, using a cluster-based environment selection strategy balances population diversity and convergence. Extensive experiments on LMaOPs with up to 10 objectives and 5000 decision variables validate the algorithm’s effectiveness, demonstrating superior performance.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101752"},"PeriodicalIF":8.2000,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A population hierarchical-based evolutionary algorithm for large-scale many-objective optimization\",\"authors\":\"Shiting Wang , Jinhua Zheng , Yingjie Zou , Yuan Liu , Juan Zou , Shengxiang Yang\",\"doi\":\"10.1016/j.swevo.2024.101752\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>In large-scale many-objective optimization problems (LMaOPs), the performance of algorithms faces significant challenges as the number of objective functions and decision variables increases. The main challenges in addressing this type of problem are as follows: the large number of decision variables creates an enormous decision space that needs to be explored, leading to slow convergence; and the high-dimensional objective space presents difficulties in selecting dominant individuals within the population. 
To address this issue, this paper introduces an evolutionary algorithm based on population hierarchy to address LMaOPs. The algorithm employs different strategies for offspring generation at various population levels. Initially, the population is categorized into three levels by fitness value: poorly performing solutions with higher fitness (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span>), better solutions with lower fitness (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>l</mi></mrow></msub></math></span>), and excellent individuals stored in the archive set (<span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span>). Subsequently, a hierarchical knowledge integration strategy (HKI) guides the evolution of individuals at different levels. Individuals in <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>l</mi></mrow></msub></math></span> generate offspring by integrating differential knowledge from <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span> and <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span>, while individuals in <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>h</mi></mrow></msub></math></span> generate offspring by learning prior knowledge from <span><math><msub><mrow><mi>P</mi></mrow><mrow><mi>a</mi></mrow></msub></math></span>. Finally, using a cluster-based environment selection strategy balances population diversity and convergence. 
Extensive experiments on LMaOPs with up to 10 objectives and 5000 decision variables validate the algorithm’s effectiveness, demonstrating superior performance.</div></div>\",\"PeriodicalId\":48682,\"journal\":{\"name\":\"Swarm and Evolutionary Computation\",\"volume\":\"91 \",\"pages\":\"Article 101752\"},\"PeriodicalIF\":8.2000,\"publicationDate\":\"2024-10-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Swarm and Evolutionary Computation\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2210650224002906\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Swarm and Evolutionary Computation","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2210650224002906","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
A population hierarchical-based evolutionary algorithm for large-scale many-objective optimization
In large-scale many-objective optimization problems (LMaOPs), algorithm performance faces significant challenges as the number of objective functions and decision variables increases. The main difficulties are twofold: the large number of decision variables creates an enormous decision space to explore, leading to slow convergence, and the high-dimensional objective space makes it hard to select dominant individuals within the population. To address these challenges, this paper introduces an evolutionary algorithm based on population hierarchy for LMaOPs. The algorithm employs different offspring-generation strategies at different population levels. First, the population is divided into three levels by fitness value: poorly performing solutions with higher fitness (P_h), better solutions with lower fitness (P_l), and excellent individuals stored in an archive set (P_a). Next, a hierarchical knowledge integration (HKI) strategy guides the evolution of individuals at each level: individuals in P_l generate offspring by integrating differential knowledge from P_a and P_h, while individuals in P_h generate offspring by learning prior knowledge from P_a. Finally, a cluster-based environmental selection strategy balances population diversity and convergence. Extensive experiments on LMaOPs with up to 10 objectives and 5000 decision variables validate the algorithm's effectiveness, demonstrating superior performance.
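The three-level partition and knowledge-integration scheme the abstract describes can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the authors' implementation: the scalar fitness, the archive size, the split point between P_l and P_h, and the differential form of the HKI operator are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def partition(pop, fitness, archive_size):
    """Sort by fitness (lower is better, per the abstract) and split into the
    three levels: archive P_a (best), P_l (better), P_h (worse)."""
    order = np.argsort(fitness)
    pop = pop[order]
    p_a = pop[:archive_size]          # excellent individuals (archive set)
    rest = pop[archive_size:]
    half = len(rest) // 2             # assumed even split between P_l and P_h
    return p_a, rest[:half], rest[half:]

def hki_offspring(p_a, p_l, p_h, f=0.5):
    """Hypothetical HKI operator: P_l individuals add a difference vector
    between an archive member and a P_h member (differential knowledge);
    P_h individuals move toward a random archive member (prior knowledge)."""
    children = []
    for x in p_l:
        a = p_a[rng.integers(len(p_a))]
        h = p_h[rng.integers(len(p_h))]
        children.append(x + f * (a - h))   # integrate knowledge from P_a and P_h
    for x in p_h:
        a = p_a[rng.integers(len(p_a))]
        children.append(x + f * (a - x))   # learn from the archive
    return np.array(children)

# Toy run: 12 individuals, 5 decision variables, synthetic scalar fitness.
pop = rng.random((12, 5))
fitness = pop.sum(axis=1)                  # stand-in fitness (assumption)
p_a, p_l, p_h = partition(pop, fitness, archive_size=4)
off = hki_offspring(p_a, p_l, p_h)
print(off.shape)                           # → (8, 5): one child per non-archive parent
```

In the full algorithm, these offspring would then pass through the cluster-based environmental selection step to balance diversity and convergence; that step is omitted here.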
About the journal:
Swarm and Evolutionary Computation is a pioneering peer-reviewed journal focused on the latest research and advancements in nature-inspired intelligent computation using swarm and evolutionary algorithms. It covers theoretical, experimental, and practical aspects of these paradigms and their hybrids, promoting interdisciplinary research. The journal prioritizes the publication of high-quality, original articles that push the boundaries of evolutionary computation and swarm intelligence, and it also welcomes survey papers on current topics and novel applications. Topics of interest include but are not limited to: Genetic Algorithms and Genetic Programming, Evolution Strategies and Evolutionary Programming, Differential Evolution, Artificial Immune Systems, Particle Swarms, Ant Colony, Bacterial Foraging, Artificial Bees, Firefly Algorithm, Harmony Search, Artificial Life, Digital Organisms, Estimation of Distribution Algorithms, Stochastic Diffusion Search, Quantum Computing, Nano Computing, Membrane Computing, Human-centric Computing, Hybridization of Algorithms, Memetic Computing, Autonomic Computing, Self-organizing Systems, and Combinatorial, Discrete, Binary, Constrained, Multi-objective, Multi-modal, Dynamic, and Large-scale Optimization.