{"title":"Large language models as surrogate models in evolutionary algorithms: A preliminary study","authors":"Hao Hao , Xiaoqun Zhang , Aimin Zhou","doi":"10.1016/j.swevo.2024.101741","DOIUrl":"10.1016/j.swevo.2024.101741","url":null,"abstract":"<div><div>Large Language Models (LLMs) have demonstrated remarkable advancements across diverse domains, manifesting considerable capabilities in evolutionary computation, notably in generating new solutions and automating algorithm design. Surrogate-assisted selection plays a pivotal role in evolutionary algorithms (EAs), especially in addressing expensive optimization problems by reducing the number of real function evaluations. However, whether LLMs can serve as surrogate models remains an unknown. In this study, we propose a novel surrogate model based purely on LLM inference capabilities, eliminating the need for training. Specifically, we formulate model-assisted selection as a classification problem or a regression problem, utilizing LLMs to directly evaluate the quality of new solutions based on historical data. This involves predicting whether a solution is good or bad, or approximating its value. This approach is then integrated into EAs, termed LLM-assisted EA (LAEA). Detailed experiments compared the visualization results of 2D data from 9 mainstream LLMs, as well as their performance on 5-10 dimensional problems. The experimental results demonstrate that LLMs have significant potential as surrogate models in evolutionary computation, achieving performance comparable to traditional surrogate models only using inference. This work offers new insights into the application of LLMs in evolutionary computation. Code is available at: <span><span>https://github.com/hhyqhh/LAEA.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101741"},"PeriodicalIF":8.2,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142323302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent cross-entropy optimizer: A novel machine learning-based meta-heuristic for global optimization","authors":"Salar Farahmand-Tabar, Payam Ashtari","doi":"10.1016/j.swevo.2024.101739","DOIUrl":"10.1016/j.swevo.2024.101739","url":null,"abstract":"<div><div>Machine Learning (ML) features are extensively applied in various domains, notably in the context of Metaheuristic (MH) optimization methods. While MHs are known for their exploitation and exploration capabilities in navigating large and complex search spaces, they are not without their inherent weaknesses. These weaknesses include slow convergence rates and a struggle to strike an optimal balance between exploration and exploitation, as well as the challenge of effective knowledge extraction from complex data. To address these shortcomings, an AI-based global optimization technique is introduced, known as the Intelligent Cross-Entropy Optimizer (ICEO). This method draws inspiration from the concept of Cross Entropy (CE), a strategy that uses Kullback–Leibler or cross-entropy divergence as a measure of closeness between two sampling distributions, and it uses the potential of Machine Learning (ML) to facilitate the extraction of knowledge from the search data to learn and guide dynamically within complex search spaces. ICEO employs the Self-Organizing Map (SOM), to train and map the intricate, high-dimensional relationships within the search space onto a reduced lattice structure. This combination empowers ICEO to effectively address the weaknesses of traditional MH algorithms. To validate the effectiveness of ICEO, a rigorous evaluation involving well-established benchmark functions, including the CEC 2017 test suite, as well as real-world engineering problems have been conducted. A comprehensive statistical analysis, employing the Wilcoxon test, ranks ICEO against other prominent optimization approaches. The results demonstrate the superiority of ICEO in achieving the optimal balance between computational efficiency, precision, and reliability. In particular, it excels in enhancing convergence rates and exploration-exploitation balance.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101739"},"PeriodicalIF":8.2,"publicationDate":"2024-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142319639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A survey of genetic algorithms for clustering: Taxonomy and empirical analysis","authors":"Hermes Robles-Berumen , Amelia Zafra , Sebastián Ventura","doi":"10.1016/j.swevo.2024.101720","DOIUrl":"10.1016/j.swevo.2024.101720","url":null,"abstract":"<div><div>Clustering, an unsupervised learning technique, aims to group patterns into clusters where similar patterns are grouped together, while dissimilar ones are placed in different clusters. This task can present itself as a complex optimization problem due to the extensive search space generated by all potential data partitions. Genetic Algorithms (GAs) have emerged as efficient tools for addressing this task. Consequently, significant advancements and numerous proposals have been developed in this field.</div><div>This work offers a comprehensive and critical review of state-of-the-art mono-objective Genetic Algorithms (GAs) for partitional clustering. From a more theoretical standpoint, it examines 22 well-known proposals in detail, covering their encoding strategies, objective functions, genetic operators, local search methods, and parent selection strategies. Based on this information, a specific taxonomy is proposed. In addition, from a more practical standpoint, a detailed experimental study is conducted to discern the advantages and disadvantages of approaches. Specifically, 22 different cluster validation indices are considered to compare the performance of clustering techniques. This evaluation is performed across 94 datasets encompassing diverse configurations, including the number of classes, separation between classes, and pattern dimensionality. Results reveal interesting findings, such as the key role of local search in optimizing results and reducing search space. Additionally, representations based on centroids and labels demonstrate greater efficiency and crossover and mutation operators do not prove to be as relevant. Ultimately, while the results are satisfactory, real-world clustering problems introduce additional complexity, especially for algorithms aiming to determine the number of clusters, resulting in diminished performance and the need for new approaches to be explored. Code, datasets and instructions to run algorithms in the LEAL library are available in an associated repository, in order to facilitate future experiments in this environment.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101720"},"PeriodicalIF":8.2,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep reinforcement learning assisted novelty search in Voronoi regions for constrained multi-objective optimization","authors":"Yufei Yang , Changsheng Zhang , Yi Liu , Jiaxu Ning , Ying Guo","doi":"10.1016/j.swevo.2024.101732","DOIUrl":"10.1016/j.swevo.2024.101732","url":null,"abstract":"<div><div>Solving constrained multi-objective optimization problems (CMOPs) requires optimizing multiple conflicting objectives while satisfying various constraints. Existing constrained multi-objective evolutionary algorithms (CMOEAs) cross infeasible regions by ignoring constraints. However, these methods might neglect promising search directions, leading to insufficient exploration of the search space. To address this issue, this paper proposes a deep reinforcement learning assisted constrained multi-objective quality-diversity algorithm. The proposed algorithm designs a diversity maintenance mechanism to promote evenly coverage of the final solution set on the constrained Pareto front. Specifically, first, a novelty-oriented archive is created using a centroid Voronoi tessellation, which divides the search space into a desired number of Voronoi regions. Each region acts as a repository of non-dominated solutions with different phenotypic characteristics to provide diversity information and supplementary evolutionary trails. Secondly, to improve resource utilization, a deep Q-network is adopted to learn a policy to select suitable Voronoi regions for offspring generation based on their novelty scores. The exploration of these regions aims to find a set of diverse, high-performing solutions to accelerate convergence and escape local optima. Compared with eight state-of-the-art CMOEAs, experimental studies on four benchmark suites and nine real-world applications demonstrate that the proposed algorithm exhibits superior or at least competitive performance, especially on problems with discrete and narrow feasible regions.</div></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101732"},"PeriodicalIF":8.2,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Industrial activated sludge model identification using hyperparameter-tuned metaheuristics","authors":"Akhil T Nair, M Arivazhagan","doi":"10.1016/j.swevo.2024.101733","DOIUrl":"10.1016/j.swevo.2024.101733","url":null,"abstract":"<div><p>This study focuses on the parameter estimation of an industrial activated sludge model using hyperparameter-tuned metaheuristic techniques. The data used in this study were collected on-site from a textile industry wastewater treatment plant. A Modified Activated Sludge Model (M-ASM) was the 'first-principle model’ selected and implemented with suitable assumptions. Advanced metaheuristic techniques, as Adaptive Tunicate Swarm Optimization (ATSO), Whale Optimization Algorithm (WOA), Rao-3 Optimization (Rao-3) and Driving Training Based Optimization (DTBO) were implemented. The hyperparameter tuning was performed with Bayesian Optimization (BO). Optimized metaheuristic algorithms were implemented for model-parameter identification. The Bayesian optimized Rao-3(BO-Rao-3) algorithm provided the best validation results, with a Mean Absolute Percentage Error (MAPE) value of 7.0141 and Normalized Root Mean Square Error (NRMSE) value of 0.2629. It also had the least execution time. BO-Rao-3 is 0.93% to 4.7% better than the other implemented hyperparameter-tuned metaheuristic techniques.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101733"},"PeriodicalIF":8.2,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142272709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Constrained large-scale multiobjective optimization based on a competitive and cooperative swarm optimizer","authors":"Jinlong Zhou , Yinggui Zhang , Ponnuthurai Nagaratnam Suganthan","doi":"10.1016/j.swevo.2024.101735","DOIUrl":"10.1016/j.swevo.2024.101735","url":null,"abstract":"<div><p>Many engineering application problems can be modeled as constrained multiobjective optimization problems (CMOPs), which have attracted much attention. In solving CMOPs, existing algorithms encounter difficulties in balancing conflicting objectives and constraints. Worse still, the performance of the algorithms deteriorates drastically when the size of the decision variables scales up. To address these issues, this study proposes a competitive and cooperative swarm optimizer for large-scale CMOPs. To balance conflict objectives and constraints, a bidirectional search mechanism based on competitive and cooperative swarms is designed. It involves two swarms, approximating the true Pareto front from two directions. To enhance the search efficiency in large-scale space, we propose a fast-converging competitive swarm optimizer. Unlike existing competitive swarm optimizers, the proposed optimizer updates the velocity and position of all particles at each iteration. Additionally, to reduce the search range of the decision space, a fuzzy decision variables operator is used. Comparison experiments have been performed on test instances with 100–1000 decision variables. Experiments demonstrate the superior performance of the proposed algorithm over five peer algorithms.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101735"},"PeriodicalIF":8.2,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142272708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving multi-objective robust optimization problems via Stakelberg-based game model","authors":"Adham Salih, Erella Eisenstadt Matalon","doi":"10.1016/j.swevo.2024.101734","DOIUrl":"10.1016/j.swevo.2024.101734","url":null,"abstract":"<div><p>Real-world multi-objective engineering problems frequently involve uncertainties stemming from environmental factors, production inaccuracies, and other sources. A critical aspect of addressing these problems, termed Multi-Objective Robust Optimization (MORO) problems, is the development of solutions that are both optimal and resilient to uncertainties. This paper proposes addressing these uncertainties through the application of Stackelberg game models, a novel approach involving the interaction of two players. The Leader searches for optimal and robust solutions and the Follower generates uncertainties based on the Leader’s chosen solutions. The Follower seeks to tackle the most challenging uncertainties associated with the Leader’s candidate solutions. Additionally, this paper introduces a novel metric to assess the robustness of a given set of solutions concerning specified uncertainties.</p><p>Based on the proposed approach, a co-evolutionary algorithm is developed. A numerical study is then conducted to evaluate the algorithm by comparing its performance with those obtained by four benchmark algorithms on nine benchmark MORO problems. The numerical study also aims to assess its sensitivity to run parameter variations. The experimental results demonstrate the proposed approach’s effectiveness in identifying a non-dominated robust set of solutions.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101734"},"PeriodicalIF":8.2,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142244171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SaDENAS: A self-adaptive differential evolution algorithm for neural architecture search","authors":"Xiaolong Han , Yu Xue , Zehong Wang , Yong Zhang , Anton Muravev , Moncef Gabbouj","doi":"10.1016/j.swevo.2024.101736","DOIUrl":"10.1016/j.swevo.2024.101736","url":null,"abstract":"<div><p>Evolutionary neural architecture search (ENAS) and differentiable architecture search (DARTS) are all prominent algorithms in neural architecture search, enabling the automated design of deep neural networks. To leverage the strengths of both methods, there exists a framework called continuous ENAS, which alternates between using gradient descent to optimize the supernet and employing evolutionary algorithms to optimize the architectural encodings. However, in continuous ENAS, there exists a premature convergence issue accompanied by the small model trap, which is a common issue in NAS. To address this issue, this paper proposes a self-adaptive differential evolution algorithm for neural architecture search (SaDENAS), which can reduce the interference caused by small models to other individuals during the optimization process, thereby avoiding premature convergence. Specifically, SaDENAS treats architectures within the search space as architectural encodings, leveraging vector differences between encodings as the basis for evolutionary operators. To achieve a trade-off between exploration and exploitation, we integrate both local and global search strategies with a mutation scaling factor to adaptively balance these two strategies. Empirical findings demonstrate that our proposed algorithm achieves better performance with superior convergence compared to other algorithms.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101736"},"PeriodicalIF":8.2,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142244172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A dimensionality reduction assisted evolutionary algorithm for high-dimensional expensive multi/many-objective optimization","authors":"Zeyuan Yan , Yuren Zhou , Wei Zheng , Chupeng Su , Weigang Wu","doi":"10.1016/j.swevo.2024.101729","DOIUrl":"10.1016/j.swevo.2024.101729","url":null,"abstract":"<div><p>Surrogate-assisted multi/many-objective evolutionary algorithms (SA-MOEAs) have shown significant progress in tackling expensive optimization problems. However, existing research primarily focuses on low-dimensional optimization problems. The main reason lies in the fact that some surrogate techniques used in SA-MOEAs, such as the Kriging model, are not applicable for exploring high-dimensional decision space. This paper introduces a surrogate-assisted multi-objective evolutionary algorithm with dimensionality reduction to address high-dimensional expensive optimization problems. The proposed algorithm includes two key insights. Firstly, we propose a dimensionality reduction framework containing three different feature extraction algorithms and a feature drift strategy to map the high-dimensional decision space into a low-dimensional decision space; this strategy helps to improve the robustness of surrogates. Secondly, we propose a sub-region search strategy to define a series of promising sub-regions in the high-dimensional decision space; this strategy helps to improve the exploration ability of the proposed SA-MOEA. Experimental results demonstrate the effectiveness of our proposed algorithm in comparison to several state-of-the-art algorithms.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101729"},"PeriodicalIF":8.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142244170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-strategy particle swarm optimization with adaptive forgetting for base station layout","authors":"Donglin Zhu , Jiaying Shen , Yuemai Zhang , Weijie Li , Xingyun Zhu , Changjun Zhou , Shi Cheng , Yilin Yao","doi":"10.1016/j.swevo.2024.101737","DOIUrl":"10.1016/j.swevo.2024.101737","url":null,"abstract":"<div><p>With the advent of 6G communication technology, user expectations for service quality have correspondingly risen. This is particularly evident in rural areas, where the challenge of ensuring signal coverage across diverse terrains is pressing. Consequently, the intelligent placement of base stations becomes a critical issue. To address this, our paper conducts a comprehensive analysis of terrain environments and village distributions in rural settings and develops a sophisticated objective function. We introduce a novel approach termed Multi-strategy Particle Swarm Optimization with Adaptive Forgetting (AFMPSO), designed to optimize the layout of base stations. This algorithm incorporates a forgetting mechanism and a center-of-mass traction strategy, which enable particles to update their positions responsively and maintain optimal individual information. Such features effectively prevent premature convergence and the risk of entrapment in local optima, thereby enhancing the efficacy of traditional particle swarm optimization techniques. In the IEEE Congress on Evolutionary Computation (CEC) 2022, AFMPSO was benchmarked against other particle swarm variants and the year’s winning algorithm. It demonstrated superior optimization capabilities. Further, our experiments utilizing both fixed and randomly configured village models revealed that AFMPSO achieved a signal coverage rate exceeding 90% in both setups, underscoring its substantial advantages and practical applicability in enhancing base station coverage. This research not only delivers an effective technical solution but also establishes a robust foundation for the future development of intelligent base station layouts.</p></div>","PeriodicalId":48682,"journal":{"name":"Swarm and Evolutionary Computation","volume":"91 ","pages":"Article 101737"},"PeriodicalIF":8.2,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142232293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}