Quality Diversity under Sparse Interaction and Sparse Reward: Application to Grasping in Robotics
Johann Huber, François Helenon, Miranda Coninx, Faïz Ben Amar, Stéphane Doncieux
Evolutionary Computation, pp. 1-30 (online ahead of print), 2025-01-14. DOI: 10.1162/evco_a_00363
Abstract: Quality-Diversity (QD) methods are algorithms that aim to generate a set of diverse and high-performing solutions to a given problem. Originally developed for evolutionary robotics, most QD studies are conducted on a limited set of domains, mainly locomotion, where the fitness and the behavior signal are dense. Grasping is a crucial task for manipulation in robotics. Despite the efforts of many research communities, this task is yet to be solved. Grasping combines challenges that are unprecedented in the QD literature: it suffers from reward sparsity, behavioral sparsity, and behavior space misalignment. The present work studies how QD can address grasping. Experiments have been conducted with 15 different methods on 10 grasping domains, corresponding to 2 different robot-gripper setups and 5 standard objects. The obtained results show that MAP-Elites variants that prioritize the selection of successful solutions outperform all the compared methods on the studied metrics by a large margin. We also found experimental evidence that sparse interaction can lead to deceptive novelty. To our knowledge, the ability to efficiently produce examples of grasping trajectories demonstrated in this work has no precedent in the literature.

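As a rough illustration of the selection mechanism highlighted in this abstract (a MAP-Elites loop that prioritizes successful solutions when choosing parents), here is a minimal sketch; the archive layout, the `is_success` flag returned by `evaluate`, and the variation operators are illustrative assumptions, not the authors' implementation.

```python
import random

def map_elites_prioritized(evaluate, random_genome, mutate, budget):
    """Minimal MAP-Elites loop that prefers successful elites as parents.

    `evaluate` is assumed to return (cell_index, fitness, is_success), where
    `is_success` marks e.g. a trajectory that actually grasped the object.
    """
    archive = {}  # cell_index -> (genome, fitness, is_success)
    for _ in range(budget):
        successful = [g for g, _, ok in archive.values() if ok]
        if successful:                       # prioritize successful elites as parents
            genome = mutate(random.choice(successful))
        elif archive:                        # otherwise fall back to any elite
            genome = mutate(random.choice([g for g, _, _ in archive.values()]))
        else:                                # bootstrap with random genomes
            genome = random_genome()
        cell, fit, ok = evaluate(genome)
        incumbent = archive.get(cell)
        if incumbent is None or fit > incumbent[1]:   # keep the best genome per cell
            archive[cell] = (genome, fit, ok)
    return archive
```
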
The Cost of Randomness in Evolutionary Algorithms: Crossover Can Save Random Bits
Carlo Kneissl, Dirk Sudholt
Evolutionary Computation, pp. 1-29 (online ahead of print), 2025-01-14. DOI: 10.1162/evco_a_00365
Abstract: Evolutionary algorithms make countless random decisions during selection, mutation, and crossover operations. These random decisions require a steady stream of random numbers. We analyze the expected number of random bits used throughout a run of an evolutionary algorithm and refer to this as the cost of randomness. We give general bounds on the cost of randomness for mutation-based evolutionary algorithms using 1-bit flips or standard mutations, using either a naive or a common, more efficient implementation that uses Θ(log n) random bits per mutation. Uniform crossover is a potentially wasteful operator, as the number of random bits used equals the Hamming distance of the two parents, which can be up to n. However, we show for a (2+1) Genetic Algorithm that is known to optimize the test function ONEMAX in roughly (e/2) n ln n expected evaluations, twice as fast as the fastest mutation-based evolutionary algorithms, that the total cost of randomness during all crossover operations on ONEMAX is only Θ(n). A more pronounced effect is shown for the common test function JUMP_k, where there is an asymptotic decrease both in the number of evaluations and in the cost of randomness. Consequently, the use of crossover can reduce the cost of randomness below that of the fastest evolutionary algorithms that only use standard mutations.

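The random-bit accounting described above can be illustrated with a small sketch: uniform crossover needs one fresh random bit only where the parents differ (so its cost equals their Hamming distance), while picking a single flip position among n positions costs roughly log2(n) bits. The function names below are hypothetical, and the paper's more refined implementations (such as sampling standard mutations efficiently) are not reproduced.

```python
import math
import random

def uniform_crossover_with_bit_count(parent1, parent2):
    """Uniform crossover that only flips a coin where the parents differ,
    so its random-bit cost equals the Hamming distance of the parents."""
    child, used_bits = [], 0
    for a, b in zip(parent1, parent2):
        if a == b:
            child.append(a)            # no randomness needed at identical positions
        else:
            used_bits += 1             # one fresh random bit per differing position
            child.append(a if random.getrandbits(1) else b)
    return child, used_bits

def bits_for_one_bit_flip(n):
    """Choosing one of n positions uniformly costs about ceil(log2 n) random bits."""
    return math.ceil(math.log2(n))
```
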
Informed Down-Sampled Lexicase Selection: Identifying Productive Training Cases for Efficient Problem Solving
Ryan Boldi, Martin Briesch, Dominik Sobania, Alexander Lalejini, Thomas Helmuth, Franz Rothlauf, Charles Ofria, Lee Spector
Evolutionary Computation 32(4): 307-337, 2024-12-02. DOI: 10.1162/evco_a_00346
Abstract: Genetic Programming (GP) often uses large training sets and requires all individuals to be evaluated on all training cases during selection. Random down-sampled lexicase selection evaluates individuals on only a random subset of the training cases, allowing for more individuals to be explored with the same number of program executions. However, sampling randomly can exclude important cases from the down-sample for a number of generations, while cases that measure the same behavior (synonymous cases) may be overused. In this work, we introduce Informed Down-Sampled Lexicase Selection. This method leverages population statistics to build down-samples that contain more distinct and therefore informative training cases. Through an empirical investigation across two different GP systems (PushGP and Grammar-Guided GP), we find that informed down-sampling significantly outperforms random down-sampling on a set of contemporary program synthesis benchmark problems. Through an analysis of the created down-samples, we find that important training cases are included in the down-sample consistently across independent evolutionary runs and systems. We hypothesize that this improvement can be attributed to the ability of Informed Down-Sampled Lexicase Selection to maintain more specialist individuals over the course of evolution, while still benefiting from reduced per-evaluation costs.

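For readers unfamiliar with the baseline this paper builds on, here is a minimal sketch of lexicase selection restricted to a random down-sample of the training cases. The `error_fn` interface and the sampling scheme are illustrative assumptions; informed down-sampling would replace the random sample with one built from population statistics, as described in the abstract.

```python
import random

def downsampled_lexicase_select(population, error_fn, training_cases, sample_size):
    """Lexicase selection on a random down-sample of training cases.

    `error_fn(individual, case)` is assumed to return a non-negative error
    (lower is better). Survivors must be elite on each case in turn.
    """
    sample = random.sample(training_cases, min(sample_size, len(training_cases)))
    random.shuffle(sample)                       # case order is randomized per selection event
    candidates = list(population)
    for case in sample:
        errors = [error_fn(ind, case) for ind in candidates]
        best = min(errors)
        candidates = [ind for ind, e in zip(candidates, errors) if e == best]
        if len(candidates) == 1:
            break
    return random.choice(candidates)
```
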
Estimation of Distribution Algorithm for Grammar-Guided Genetic Programming
Pablo Ramos Criado, D. Barrios Rolanía, David de la Hoz, Daniel Manrique
Evolutionary Computation 32(4): 339-370, 2024-12-02. DOI: 10.1162/evco_a_00345
Abstract: Genetic variation operators in grammar-guided genetic programming are fundamental to guide the evolutionary process in search and optimization problems. However, they show some limitations, mainly derived from an unbalanced exploration and local-search trade-off. This paper presents an estimation of distribution algorithm for grammar-guided genetic programming to overcome this difficulty and thus increase the performance of the evolutionary algorithm. Our proposal employs an extended dynamic stochastic context-free grammar to encode and calculate the estimation of the distribution of the search space from some promising individuals in the population. Unlike traditional estimation of distribution algorithms, the proposed approach improves exploratory behavior by smoothing the estimated distribution model. Therefore, this algorithm is referred to as SEDA, smoothed estimation of distribution algorithm. Experiments have been conducted to compare overall performance using a typical genetic programming crossover operator, an incremental estimation of distribution algorithm, and the proposed approach after tuning their hyperparameters. These experiments involve challenging problems to test the local search and exploration features of the three evolutionary systems. The results show that grammar-guided genetic programming with SEDA achieves the most accurate solutions with an intermediate convergence speed.

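A hedged sketch of the generic mechanism behind an estimation of distribution algorithm for grammar-guided GP: production-rule probabilities are estimated from promising individuals, with additive smoothing keeping some probability mass on unused rules to preserve exploration. This is not SEDA's extended dynamic stochastic context-free grammar; the data structures and the `alpha` smoothing parameter below are assumptions for illustration.

```python
from collections import Counter, defaultdict

def estimate_rule_probabilities(promising_derivations, grammar, alpha=1.0):
    """Estimate production probabilities per nonterminal with Laplace smoothing.

    `promising_derivations` is assumed to be a list of derivations, each a list of
    (nonterminal, production) pairs; `grammar` maps nonterminal -> list of productions.
    """
    counts = defaultdict(Counter)
    for derivation in promising_derivations:
        for nonterminal, production in derivation:
            counts[nonterminal][production] += 1
    probabilities = {}
    for nonterminal, productions in grammar.items():
        total = sum(counts[nonterminal].values()) + alpha * len(productions)
        probabilities[nonterminal] = {
            prod: (counts[nonterminal][prod] + alpha) / total for prod in productions
        }
    return probabilities
```

Sampling new individuals from these smoothed rule probabilities (instead of applying crossover) is what makes the approach an estimation of distribution algorithm.
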
Territorial Differential Meta-Evolution: An Algorithm for Seeking All the Desirable Optima of a Multivariable Function
Richard Wehr, Scott R. Saleska
Evolutionary Computation 32(4): 399-426, 2024-12-02. DOI: 10.1162/evco_a_00337
Abstract: Territorial Differential Meta-Evolution (TDME) is an efficient, versatile, and reliable algorithm for seeking all the global or desirable local optima of a multivariable function. It employs a progressive niching mechanism to optimize even challenging, high-dimensional functions with multiple global optima and misleading local optima. This paper introduces TDME and uses standard and novel benchmark problems to quantify its advantages over HillVallEA, which is the best-performing algorithm on the standard benchmark suite that has been used by all major multimodal optimization competitions since 2013. TDME matches HillVallEA on that benchmark suite and categorically outperforms it on a more comprehensive suite that better reflects the potential diversity of optimization problems. TDME achieves that performance without any problem-specific parameter tuning.

Virtual Position Guided Strategy for Particle Swarm Optimization Algorithms on Multimodal Problems
Chao Li, Jun Sun, Li-Wei Li, Min Shan, Vasile Palade, Xiaojun Wu
Evolutionary Computation 32(4): 427-458, 2024-12-02. DOI: 10.1162/evco_a_00352
Abstract: Premature convergence is a thorny problem for particle swarm optimization (PSO) algorithms, especially on multimodal problems, where maintaining swarm diversity is crucial. However, most enhancement strategies for PSO, including the existing diversity-guided strategies, have not fully addressed this issue. This paper proposes the virtual position guided (VPG) strategy for PSO algorithms. The VPG strategy calculates diversity values for two different populations and establishes a diversity baseline. It then dynamically guides the algorithm to conduct different search behaviors, through three phases (divergence, normal, and acceleration) in each iteration, based on the relationships among these diversity values and the baseline. Collectively, these phases orchestrate different schemes to balance exploration and exploitation, collaboratively steering the algorithm away from local optima and towards enhanced solution quality. The introduction of the "virtual position" caters to the strategy's adaptability across various PSO algorithms, ensuring the generality and effectiveness of the proposed VPG strategy. With a single hyperparameter and a recommended usual setup, VPG is easy to implement. The experimental results demonstrate that the VPG strategy is superior to several canonical and state-of-the-art diversity-guidance strategies, and is effective in improving the search performance of most PSO algorithms on multimodal problems of various dimensionalities.

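To make the idea of a diversity baseline concrete, here is a minimal sketch of how a swarm diversity value and a baseline could drive phase switching in the spirit of the abstract. The diversity formula, the thresholds, and the mapping from diversity to phases are assumptions for illustration only; the paper's virtual-position mechanism is not reproduced here.

```python
import numpy as np

def swarm_diversity(positions):
    """Average Euclidean distance of particles to the swarm's mean position."""
    positions = np.asarray(positions, dtype=float)
    center = positions.mean(axis=0)
    return float(np.linalg.norm(positions - center, axis=1).mean())

def choose_phase(diversity, baseline, low=0.5, high=1.5):
    """Illustrative phase switch: diverge when the swarm has collapsed relative to
    the baseline, accelerate when it is well spread out, otherwise search normally."""
    if diversity < low * baseline:
        return "divergence"
    if diversity > high * baseline:
        return "acceleration"
    return "normal"
```
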
Parameterless Gene-Pool Optimal Mixing Evolutionary Algorithms
Arkadiy Dushatskiy, Marco Virgolin, Anton Bouter, Dirk Thierens, Peter A. N. Bosman
Evolutionary Computation 32(4): 371-397, 2024-12-02. DOI: 10.1162/evco_a_00338
Abstract: When it comes to solving optimization problems with evolutionary algorithms (EAs) in a reliable and scalable manner, detecting and exploiting linkage information, that is, dependencies between variables, can be key. In this paper, we present the latest version of, and propose substantial enhancements to, the gene-pool optimal mixing evolutionary algorithm (GOMEA): an EA explicitly designed to estimate and exploit linkage information. We begin by performing a large-scale search over several GOMEA design choices to understand what matters most and obtain a generally best-performing version of the algorithm. Next, we introduce a novel version of GOMEA, called CGOMEA, where linkage-based variation is further improved by filtering solution mating based on conditional dependencies. We compare our latest version of GOMEA, the newly introduced CGOMEA, and another contending linkage-aware EA, DSMGA-II, in an extensive experimental evaluation, involving a benchmark set of nine black-box problems that can be solved efficiently only if their inherent dependency structure is unveiled and exploited. Finally, in an attempt to make EAs more usable and resilient to parameter choices, we investigate the performance of different automatic population management schemes for GOMEA and CGOMEA, de facto making the EAs parameterless. Our results show that GOMEA and CGOMEA significantly outperform the original GOMEA and DSMGA-II on most problems, setting a new state of the art for the field.

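A minimal sketch of a single gene-pool optimal mixing pass, the core operation GOMEA is built around: for each linkage set, the corresponding genes are copied from a random donor and kept only if the fitness does not degrade. In real GOMEA the linkage sets are learned from the population (for example as a linkage tree); here they are assumed to be given, and maximization is assumed.

```python
import random

def gene_pool_optimal_mixing(solution, fitness, population, linkage_sets, evaluate):
    """One optimal mixing pass over a solution (maximization assumed)."""
    current = list(solution)
    for subset in linkage_sets:
        donor = random.choice(population)
        backup = [current[i] for i in subset]
        for i in subset:
            current[i] = donor[i]            # try the donor's genes for this linkage set
        new_fitness = evaluate(current)
        if new_fitness >= fitness:
            fitness = new_fitness            # accept: not worse than before
        else:
            for i, value in zip(subset, backup):
                current[i] = value           # reject: revert the donated genes
    return current, fitness
```
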
Genetic Programming-Based Feature Selection for Symbolic Regression on Incomplete Data
Baligh Al-Helali, Qi Chen, Bing Xue, Mengjie Zhang
Evolutionary Computation, pp. 1-27 (online ahead of print), 2024-11-21. DOI: 10.1162/evco_a_00362
Abstract: High dimensionality is one of the serious real-world data challenges in symbolic regression, and it becomes even more challenging when the data are incomplete. Genetic programming has been successfully used for high-dimensional tasks due to its natural feature selection ability, but it is not directly applicable to incomplete data. Commonly, the missing values must be imputed first, and genetic programming is then performed on the imputed, complete data. However, when many of the incomplete features are irrelevant, it is intuitively unnecessary to perform costly imputations on them. For this purpose, this work proposes a genetic programming-based approach to select features directly from incomplete high-dimensional data to improve symbolic regression performance. We extend the concept of identity/neutral elements from mathematics to the function operators of genetic programming so that they can handle the missing values in incomplete data. Experiments have been conducted on a number of data sets with different missingness ratios in high-dimensional symbolic regression tasks. The results show that the proposed method leads to better symbolic regression results than state-of-the-art methods that can select features directly from incomplete data. Further results show that our approach not only achieves better symbolic regression accuracy but also selects a smaller number of relevant features, consequently improving both the effectiveness and the efficiency of the learning process.

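A hedged sketch of the identity/neutral-element idea mentioned in the abstract: arithmetic operators replace a missing operand with the operator's identity element, so that GP trees can be evaluated on incomplete data without imputation. The exact operator definitions in the paper may differ; `MISSING` and the function names below are illustrative.

```python
import math

MISSING = float("nan")   # sentinel for a missing feature value

def is_missing(x):
    return isinstance(x, float) and math.isnan(x)

def protected_add(a, b):
    """Missing operands are replaced by 0, the identity element of addition."""
    a = 0.0 if is_missing(a) else a
    b = 0.0 if is_missing(b) else b
    return a + b

def protected_mul(a, b):
    """Missing operands are replaced by 1, the identity element of multiplication."""
    a = 1.0 if is_missing(a) else a
    b = 1.0 if is_missing(b) else b
    return a * b

# e.g. protected_add(MISSING, 3.0) -> 3.0 and protected_mul(MISSING, 3.0) -> 3.0
```
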
Discovering and Exploiting Sparse Rewards in a Learned Behavior Space
Giuseppe Paolo, Miranda Coninx, Alban Laflaquière, Stephane Doncieux
Evolutionary Computation 32(3): 275-305, 2024-09-03. DOI: 10.1162/evco_a_00343
Abstract: Learning optimal policies in sparse reward settings is difficult, as the learning agent has little to no feedback on the quality of its actions. In these situations, a good strategy is to focus on exploration, hopefully leading to the discovery of a reward signal to improve on. A learning algorithm capable of dealing with this kind of setting has to be able to (1) explore possible agent behaviors and (2) exploit any possible discovered reward. Exploration algorithms have been proposed that require the definition of a low-dimensional behavior space, in which the behavior generated by the agent's policy can be represented. The need to design this space a priori such that it is worth exploring is a major limitation of these algorithms. In this work, we introduce STAX, an algorithm designed to learn a behavior space on the fly and to explore it while optimizing any reward discovered. It does so by separating the exploration and learning of the behavior space from the exploitation of the reward through an alternating two-step process. In the first step, STAX builds a repertoire of diverse policies while learning a low-dimensional representation of the high-dimensional observations generated during the evaluation of the policies. In the exploitation step, emitters optimize the performance of the discovered rewarding solutions. Experiments conducted on three different sparse reward environments show that STAX performs comparably to existing baselines while requiring much less prior information about the task, as it autonomously builds the behavior space it explores.

Preliminary Analysis of Simple Novelty Search
R. Paul Wiegand
Evolutionary Computation 32(3): 249-273, 2024-09-03. DOI: 10.1162/evco_a_00340
Abstract: Novelty search is a powerful tool for finding diverse sets of objects in complicated spaces. Recent experiments on simplified versions of novelty search introduce the idea that novelty search happens at the level of the archive space, rather than at the level of individual points. The sparseness measure and the archive update criterion create a process driven by two pressures: (1) spreading out to cover the space while remaining as efficiently packed as possible, and (2) measures inspired by k-nearest-neighbor theory. In this paper, we generalize previous simplifications of novelty search to include traditional population (μ, λ) dynamics for generating new search points, where the population and the archive are updated separately. We provide some theoretical guidance regarding balancing mutation and sparseness criteria and introduce the concept of saturation as a way of talking about fully covered spaces. We show empirically that claims that novelty search is inherently objectiveless are incorrect. We leverage the understanding of novelty search as an optimizer of archive coverage, suggest several ways to improve the search, and demonstrate one simple improvement: generating some new points directly from the archive rather than from the parent population.

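The sparseness measure and archive update analyzed in this paper are standard novelty-search machinery; a minimal sketch follows, with the value of k, the distance metric, and the archive threshold chosen for illustration only.

```python
import numpy as np

def sparseness(candidate, archive, k=15):
    """Novelty of a behavior descriptor: average distance to its k nearest
    neighbors in the archive (infinite when the archive is empty)."""
    if not archive:
        return float("inf")
    distances = np.sort(np.linalg.norm(np.asarray(archive) - np.asarray(candidate), axis=1))
    return float(distances[: min(k, len(distances))].mean())

def maybe_add_to_archive(candidate, archive, threshold, k=15):
    """Standard archive update: add the candidate only if it is sufficiently novel."""
    if sparseness(candidate, archive, k) > threshold:
        archive.append(list(candidate))
```
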