{"title":"OneMax Is Not the Easiest Function for Fitness Improvements","authors":"Marc Kaufmann;Maxime Larcher;Johannes Lengler;Xun Zou","doi":"10.1162/evco_a_00348","DOIUrl":"10.1162/evco_a_00348","url":null,"abstract":"We study the (1:s+1) success rule for controlling the population size of the (1,λ)-EA. It was shown by Hevia Fajardo and Sudholt that this parameter control mechanism can run into problems for large s if the fitness landscape is too easy. They conjectured that this problem is worst for the OneMax benchmark, since in some well-established sense OneMax is known to be the easiest fitness landscape. In this paper, we disprove this conjecture. We show that there exist s and ɛ such that the self-adjusting (1,λ)-EA with the (1:s+1)-rule optimizes OneMax efficiently when started with ɛn zero-bits, but does not find the optimum in polynomial time on Dynamic BinVal. Hence, we show that there are landscapes where the problem of the (1:s+1)-rule for controlling the population size of the (1,λ)-EA is more severe than for OneMax. The key insight is that, while OneMax is the easiest function for decreasing the distance to the optimum, it is not the easiest fitness landscape with respect to finding fitness-improving steps.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"33 1","pages":"27-54"},"PeriodicalIF":4.6,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140295208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesising Diverse and Discriminatory Sets of Instances Using Novelty Search in Combinatorial Domains","authors":"Alejandro Marrero;Eduardo Segredo;Coromoto León;Emma Hart","doi":"10.1162/evco_a_00350","DOIUrl":"10.1162/evco_a_00350","url":null,"abstract":"Gathering sufficient instance data to either train algorithm-selection models or understand algorithm footprints within an instance space can be challenging. We propose an approach to generating synthetic instances that are tailored to perform well with respect to a target algorithm belonging to a predefined portfolio but are also diverse with respect to their features. Our approach uses a novelty search algorithm with a linearly weighted fitness function that balances novelty and performance to generate a large set of diverse and discriminatory instances in a single run of the algorithm. We consider two definitions of novelty: (1) with respect to discriminatory performance within a portfolio of solvers; (2) with respect to the features of the evolved instances. We evaluate the proposed method with respect to its ability to generate diverse and discriminatory instances in two domains (knapsack and bin-packing), comparing to another well-known quality diversity method, Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) and an evolutionary algorithm that only evolves for discriminatory behaviour. The results demonstrate that the novelty search method outperforms its competitors in terms of coverage of the space and its ability to generate instances that are diverse regarding the relative size of the “performance gap” between the target solver and the remaining solvers in the portfolio. Moreover, for the Knapsack domain, we also show that we are able to generate novel instances in regions of an instance space not covered by existing benchmarks using a portfolio of state-of-the-art solvers. Finally, we demonstrate that the method is robust to different portfolios of solvers (stochastic approaches, deterministic heuristics, and state-of-the-art methods), thereby providing further evidence of its generality.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"33 1","pages":"55-90"},"PeriodicalIF":4.6,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms","authors":"Jun He;Yuren Zhou","doi":"10.1162/evco_a_00349","DOIUrl":"10.1162/evco_a_00349","url":null,"abstract":"The fitness level method is a popular tool for analyzing the hitting time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and estimate lower and upper bounds on the hitting time using transition probabilities between fitness levels. However, the lower bound generated by this method is often loose. An open question regarding the fitness level method is what are the tightest lower and upper time bounds that can be constructed based on transition probabilities between fitness levels. To answer this question, we combine drift analysis with fitness levels and define the tightest bound problem as a constrained multiobjective optimization problem subject to fitness levels. The tightest metric bounds by fitness levels are constructed and proven for the first time. Then linear bounds are derived from metric bounds and a framework is established that can be used to develop different fitness level methods for different types of linear bounds. The framework is generic and promising, as it can be used to draw tight time bounds on both fitness landscapes with and without shortcuts. This is demonstrated in the example of the (1+1) EA maximizing the TwoMax1 function.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"33 1","pages":"1-25"},"PeriodicalIF":4.6,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140295207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Layered Learning Approach to Scaling in Learning Classifier Systems for Boolean Problems","authors":"Isidro M. Alvarez;Trung B. Nguyen;Will N. Browne;Mengjie Zhang","doi":"10.1162/evco_a_00351","DOIUrl":"10.1162/evco_a_00351","url":null,"abstract":"Evolutionary Computation (EC) often throws away learned knowledge as it is reset for each new problem addressed. Conversely, humans can learn from small-scale problems, retain this knowledge (plus functionality), and then successfully reuse them in larger-scale and/or related problems. Linking solutions to problems has been achieved through layered learning, where an experimenter sets a series of simpler related problems to solve a more complex task. Recent works on Learning Classifier Systems (LCSs) has shown that knowledge reuse through the adoption of Code Fragments, GP-like tree-based programs, is plausible. However, random reuse is inefficient. Thus, the research question is how LCS can adopt a layered-learning framework, such that increasingly complex problems can be solved efficiently. An LCS (named XCSCF*) has been developed to include the required base axioms necessary for learning, refined methods for transfer learning and learning recast as a decomposition into a series of subordinate problems. These subordinate problems can be set as a curriculum by a teacher, but this does not mean that an agent can learn from it; especially if it only extracts over-fitted knowledge of each problem rather than the underlying scalable patterns and functions. Results show that from a conventional tabula rasa, with only a vague notion of which subordinate problems might be relevant, XCSCF* captures the general logic behind the tested domains and therefore can solve any n-bit Multiplexer, n-bit Carry-one, n-bit Majority-on, and n-bit Even-parity problems. This work demonstrates a step towards continual learning as learned knowledge is effectively reused in subsequent problems.","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"33 1","pages":"115-140"},"PeriodicalIF":4.6,"publicationDate":"2025-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Solving Many-objective Optimization Problems based on PF Shape Classification and Vector Angle Selection.","authors":"Y T Wu, F Z Ge, D B Chen, L Shi","doi":"10.1162/evco_a_00373","DOIUrl":"https://doi.org/10.1162/evco_a_00373","url":null,"abstract":"<p><p>Most many-objective optimization algorithms (MaOEAs) adopt a pre-assumed Pareto front (PF) shape, instead of the true PF shape, to balance convergence and diversity in high-dimensional objective space, resulting in insufficient selection pressure and poor performance. To address these shortcomings, we propose MaOEA-PV based on PF shape classification and vector angle selection. The three innovation points of this paper are as follows: (I) a new method for PF classification; (II) a new fitness function that combines convergence and diversity indicators, thereby enhancing the quality of parents during mating selection; and (III) the selection of individuals exhibiting the best convergence to add to the population, overcoming the lack of selection pressure during environmental selection. Subsequently, the max-min vector angle strategy is employed. The solutions with the highest diversity and the least convergence are selected based on the max and min vector angles, respectively, which balances convergence and diversity. The performance of algorithm is compared with those of five state-of-the-art MaOEAs on 41 test problems and 5 real-world problems comprising as many 15 objectives. The experimental results demonstrate the competitive and effective nature of the proposed algorithm.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-42"},"PeriodicalIF":4.6,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MO-SMAC: Multi-objective Sequential Model-based Algorithm Configuration.","authors":"Jeroen G Rook, Carolin Benjamins, Jakob Bossek, Heike Trautmann, Holger H Hoos, Marius Lindauer","doi":"10.1162/evco_a_00371","DOIUrl":"https://doi.org/10.1162/evco_a_00371","url":null,"abstract":"<p><p>Automated algorithm configuration aims at finding well-performing parameter configurations for a given problem, and it has proven to be effective within many AI domains, including evolutionary computation. Initially, the focus was on excelling in one performance objective, but, in reality, most tasks have a variety of (conflicting) objectives. The surging demand for trustworthy and resource-efficient AI systems makes this multi-objective perspective even more prevalent. We propose a new general-purpose multi-objective automated algorithm configurator by extending the widely-used SMAC framework. Instead of finding a single configuration, we search for a non-dominated set that approximates the actual Pareto set. We propose a pure multi-objective Bayesian Optimisation approach for obtaining promising configurations by using the predicted hypervolume improvement as acquisition function. We also present a novel intensification procedure to efficiently handle the selection of configurations in a multi-objective context. Our approach is empirically validated and compared across various configuration scenarios in four AI domains, demonstrating superiority over baseline methods, competitiveness with MO-ParamILS on individual scenarios and an overall best performance.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-25"},"PeriodicalIF":4.6,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143651775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Beyond Landscape Analysis: DynamoRep Features For Capturing Algorithm-Problem Interaction In Single-Objective Continuous Optimization.","authors":"Gjorgjina Cenikj, Gašper Petelin, Carola Doerr, Peter Korošec, Tome Eftimov","doi":"10.1162/evco_a_00370","DOIUrl":"https://doi.org/10.1162/evco_a_00370","url":null,"abstract":"<p><p>The representation of optimization problems and algorithms in terms of numerical features is a well-established tool for comparing optimization problem instances, for analyzing the behavior of optimization algorithms, and the quality of existing problem benchmarks, as well as for automated per-instance algorithm selection and configuration approaches. Extending purely problem-centered feature collections, our recently proposed DynamoRep features provide a simple and inexpensive representation of the algorithmproblem interaction during the optimization process. In this paper, we conduct a comprehensive analysis of the predictive power of the DynamoRep features for the problem classification, algorithm selection, and algorithm classification tasks. In particular, the features are evaluated for the classification of problem instances into problem classes from the BBOB (Black Box Optimization Benchmarking) suite, selecting the best algorithm to solve a given problem from a portfolio of three algorithms (Differential Evolution, Evolutionary Strategy, and Particle Swarm Optimization), as well as distinguishing these algorithms based on their trajectories. We show that, despite being much cheaper to compute, they can yield results comparable to those using state-ofthe-art Exploratory Landscape Analysis features.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-28"},"PeriodicalIF":4.6,"publicationDate":"2025-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multi-Objective Continuous Optimization Problems.","authors":"Moritz Vinzent Seiler, Pascal Kerschke, Heike Trautmann","doi":"10.1162/evco_a_00367","DOIUrl":"https://doi.org/10.1162/evco_a_00367","url":null,"abstract":"<p><p>In many recent works,the potential of Exploratory Landscape Analysis (ELA) features to numerically characterize single-objective continuous optimization problems has been demonstrated. These numerical features provide the input for all kinds of machine learning tasks in the domain of continuous optimization problems, ranging, i.a., from High-level Property Prediction to Automated Algorithm Selection and Automated Algorithm Configuration. Without ELA features, analyzing and understanding the characteristics of single-objective continuous optimization problems is - to the best of our knowledge - very limited. Yet, despite their usefulness, as demonstrated in several past works, ELA features suffer from several drawbacks. These include, in particular, (1.) a strong correlation between multiple features, as well as (2.) its very limited applicability to multiobjective continuous optimization problems. As a remedy, recent works proposed deep learning-based approaches as alternatives to ELA. In these works, among others point-cloud transformers were used to characterize an optimization problem's fitness landscape. However, these approaches require a large amount of labeled training data. Within this work, we propose a hybrid approach, Deep-ELA, which combines (the benefits of) deep learning and ELA features. We pre-trained four transformers on millions of randomly generated optimization problems to learn deep representations of the landscapes of continuous single- and multi-objective optimization problems. Our proposed framework can either be used out-of-the-box for analyzing single- and multiobjective continuous optimization problems, or subsequently fine-tuned to various tasks focusing on algorithm behavior and problem understanding.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-27"},"PeriodicalIF":4.6,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Survey of interactive evolutionary decomposition-based multiobjective optimization methods.","authors":"Giomara Lárraga, Kaisa Miettinen","doi":"10.1162/evco_a_00366","DOIUrl":"https://doi.org/10.1162/evco_a_00366","url":null,"abstract":"<p><p>Interactive methods support decision-makers in finding the most preferred solution for multiobjective optimization problems, where multiple conflicting objective functions must be optimized simultaneously. These methods let a decision-maker provide preference information iteratively during the solution process to find solutions of interest, allowing them to learn about the trade-offs in the problem and the feasibility of the preferences. Several interactive evolutionary multiobjective optimization methods have been proposed in the literature. In the evolutionary computation community, the so-called decomposition-basedmethods have been increasingly popular because of their good performance in problems with many objective functions. They decompose the multiobjective optimization problem into multiple sub-problems to be solved collaboratively. Various interactive versions of decomposition-based methods have been proposed. However, most of them do not consider the desirable properties of real interactive solution processes, such as avoiding imposing a high cognitive burden on the decision-maker, allowing them to decide when to interact with the method, and supporting them in selecting a final solution. This paper reviews interactive evolutionary decomposition-based multiobjective optimization methods and different methodologies utilized to incorporate interactivity in them. Additionally, desirable properties of interactive decomposition-based multiobjective evolutionary optimization methods are identified, aiming to make them easier to be applied in real-world problems.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-39"},"PeriodicalIF":4.6,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Runtime Analysis of Typical Decomposition Approaches in MOEA/D for Many-Objective Optimization Problems.","authors":"Zhengxin Huang, Yunren Zhou, Zefeng Chen, Qianlong Dang","doi":"10.1162/evco_a_00364","DOIUrl":"https://doi.org/10.1162/evco_a_00364","url":null,"abstract":"<p><p>Decomposition-based multi-objective evolutionary algorithms (MOEAs) are popular methods utilized to address many-objective optimization problems (MaOPs). These algorithms decompose the original MaOP into several scalar optimization subproblems, and solve them to obtain a set of solutions to approximate the Pareto front (PF). The decomposition approach is an important component in them. This paper presents a runtime analysis of a MOEA based on the classic decomposition framework using the typical weighted sum (WS), Tchebycheff (TCH), and penalty-based boundary intersection (PBI) approaches to obtain an optimal solution for any subproblem of two pseudo-Boolean benchmark MaOPs, namely mLOTZ and mCOCZ. Due to the complexity and limitation of the theoretical analysis techniques, the analyzed algorithm employs one-bit mutation to generate offspring individuals. The results indicate that when using WS, the analyzed algorithm can consistently find an optimal solution for every subproblem, which is located in the PF, in polynomial expected runtime. In contrast, the algorithm requires at least exponential expected runtime (with respect to the number of objectives m) for certain subproblems when using TCH or PBI, even though the landscapes of all objective functions in the two benchmarks are strictly monotone. Moreover, this analysis reveals a drawback of using WS: the optimal solutions obtained by solving subproblems are more easily mapped to the same point in the PF, compared to the case of using TCH. When using PBI, a smaller value of the penalty parameter is a good choice for faster convergence to the PF but may compromise diversity. To further understand the impact of these approaches in practical algorithms, numerical experiments on using bit-wise mutation to generate offspring individuals are conducted. The findings of this study may be helpful for designing more efficient decomposition approaches for MOEAs in future research.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-32"},"PeriodicalIF":4.6,"publicationDate":"2025-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}