{"title":"Neural Architecture Search Using Covariance Matrix Adaptation Evolution Strategy.","authors":"Nilotpal Sinha, Kuan-Wen Chen","doi":"10.1162/evco_a_00331","DOIUrl":"10.1162/evco_a_00331","url":null,"abstract":"<p><p>Evolution-based neural architecture search methods have shown promising results, but they require high computational resources because these methods involve training each candidate architecture from scratch and then evaluating its fitness, which results in long search time. Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has shown promising results in tuning hyperparameters of neural networks but has not been used for neural architecture search. In this work, we propose a framework called CMANAS which applies the faster convergence property of CMA-ES to the deep neural architecture search problem. Instead of training each individual architecture seperately, we used the accuracy of a trained one shot model (OSM) on the validation data as a prediction of the fitness of the architecture, resulting in reduced search time. We also used an architecture-fitness table (AF table) for keeping a record of the already evaluated architecture, thus further reducing the search time. The architectures are modeled using a normal distribution, which is updated using CMA-ES based on the fitness of the sampled population. Experimentally, CMANAS achieves better results than previous evolution-based methods while reducing the search time significantly. The effectiveness of CMANAS is shown on two different search spaces using four datasets: CIFAR-10, CIFAR-100, ImageNet, and ImageNet16-120. All the results show that CMANAS is a viable alternative to previous evolution-based methods and extends the application of CMA-ES to the deep neural architecture search field.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"177-204"},"PeriodicalIF":6.8,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9424655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Single-Objective Sub-Graph-Based Mutation for Solving the Bi-Objective Minimum Spanning Tree Problem.","authors":"Jakob Bossek, Christian Grimme","doi":"10.1162/evco_a_00335","DOIUrl":"10.1162/evco_a_00335","url":null,"abstract":"<p><p>We contribute to the efficient approximation of the Pareto-set for the classical NP-hard multiobjective minimum spanning tree problem (moMST) adopting evolutionary computation. More precisely, by building upon preliminary work, we analyze the neighborhood structure of Pareto-optimal spanning trees and design several highly biased sub-graph-based mutation operators founded on the gained insights. In a nutshell, these operators replace (un)connected sub-trees of candidate solutions with locally optimal sub-trees. The latter (biased) step is realized by applying Kruskal's single-objective MST algorithm to a weighted sum scalarization of a sub-graph. We prove runtime complexity results for the introduced operators and investigate the desirable Pareto-beneficial property. This property states that mutants cannot be dominated by their parent. Moreover, we perform an extensive experimental benchmark study to showcase the operator's practical suitability. Our results confirm that the sub-graph-based operators beat baseline algorithms from the literature even with severely restricted computational budget in terms of function evaluations on four different classes of complete graphs with different shapes of the Pareto-front.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"143-175"},"PeriodicalIF":6.8,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9967379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Role of Morphological Variation in Evolutionary Robotics: Maximizing Performance and Robustness.","authors":"Jonata Tyska Carvalho, Stefano Nolfi","doi":"10.1162/evco_a_00336","DOIUrl":"10.1162/evco_a_00336","url":null,"abstract":"<p><p>Exposing an evolutionary algorithm that is used to evolve robot controllers to variable conditions is necessary to obtain solutions which are robust and can cross the reality gap. However, we do not yet have methods for analyzing and understanding the impact of the varying morphological conditions which impact the evolutionary process, and therefore for choosing suitable variation ranges. By morphological conditions, we refer to the starting state of the robot, and to variations in its sensor readings during operation due to noise. In this paper, we introduce a method that permits us to measure the impact of these morphological variations and we analyze the relation between the amplitude of variations, the modality with which they are introduced, and the performance and robustness of evolving agents. Our results demonstrate that (i) the evolutionary algorithm can tolerate morphological variations which have a very high impact, (ii) variations affecting the actions of the agent are tolerated much better than variations affecting the initial state of the agent or of the environment, and (iii) improving the accuracy of the fitness measure through multiple evaluations is not always useful. Moreover, our results show that morphological variations permit generating solutions which perform better both in varying and non-varying conditions.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"125-142"},"PeriodicalIF":6.8,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9726876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing Robot Controller Optimization Methods on Evolvable Morphologies.","authors":"Fuda van Diggelen, Eliseo Ferrante, A E Eiben","doi":"10.1162/evco_a_00334","DOIUrl":"10.1162/evco_a_00334","url":null,"abstract":"<p><p>In this paper, we compare Bayesian Optimization, Differential Evolution, and an Evolution Strategy employed as a gait-learning algorithm in modular robots. The motivational scenario is the joint evolution of morphologies and controllers, where \"newborn\" robots also undergo a learning process to optimize their inherited controllers (without changing their bodies). This context raises the question: How do gait-learning algorithms compare when applied to various morphologies that are not known in advance (and thus need to be treated as without priors)? To answer this question, we use a test suite of twenty different robot morphologies to evaluate our gait-learners and compare their efficiency, efficacy, and sensitivity to morphological differences. The results indicate that Bayesian Optimization and Differential Evolution deliver the same solution quality (walking speed for the robot) with fewer evaluations than the Evolution Strategy. Furthermore, the Evolution Strategy is more sensitive for morphological differences (its efficacy varies more between different morphologies) and is more subject to luck (repeated runs on the same morphology show greater variance in the outcomes).</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"105-124"},"PeriodicalIF":6.8,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9541798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthesising Diverse and Discriminatory Sets of Instances using Novelty Search in Combinatorial Domains.","authors":"Alejandro Marrero, Eduardo Segredo, Coromoto León, Emma Hart","doi":"10.1162/evco_a_00350","DOIUrl":"https://doi.org/10.1162/evco_a_00350","url":null,"abstract":"<p><p>Gathering sufficient instance data to either train algorithm-selection models or understand algorithm footprints within an instance space can be challenging. We propose an approach to generating synthetic instances that are tailored to perform well with respect to a target algorithm belonging to a predefined portfolio but are also diverse with respect to their features. Our approach uses a novelty search algorithm with a linearly weighted fitness function that balances novelty and performance to generate a large set of diverse and discriminatory instances in a single run of the algorithm. We consider two definitions of novelty: (1) with respect to discriminatory performance within a portfolio of solvers; (2) with respect to the features of the evolved instances. We evaluate the proposed method with respect to its ability to generate diverse and discriminatory instances in two domains (knapsack and bin-packing), comparing to another well-known quality diversity method, Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) and an evolutionary algorithm that only evolves for discriminatory behaviour. The results demonstrate that the novelty search method outperforms its competitors in terms of coverage of the space and its ability to generate instances that are diverse regarding the relative size of the \"performance gap\" between the target solver and the remaining solvers in the portfolio. Moreover, for the Knapsack domain, we also show that we are able to generate novel instances in regions of an instance space not covered by existing benchmarks using a portfolio of state-of-the-art solvers. Finally, we demonstrate that the method is robust to different portfolios of solvers (stochastic approaches, deterministic heuristics and state-of-the-art methods), thereby providing further evidence of its generality.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-41"},"PeriodicalIF":6.8,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Layered Learning Approach to Scaling in Learning Classifier Systems for Boolean Problems.","authors":"Isidro M Alvarez, Trung B Nguyen, Will N Browne, Mengjie Zhang","doi":"10.1162/evco_a_00351","DOIUrl":"https://doi.org/10.1162/evco_a_00351","url":null,"abstract":"<p><p>Evolutionary Computation (EC) often throws away learned knowledge as it is reset for each new problem addressed. Conversely, humans can learn from small-scale problems, retain this knowledge (plus functionality) and then successfully reuse them in larger-scale and/or related problems. Linking solutions to problems together has been achieved through layered learning, where an experimenter sets a series of simpler related problems to solve a more complex task. Recent works on Learning Classifier Systems (LCSs) has shown that knowledge reuse through the adoption of Code Fragments, GP-like tree-based programs, is plausible. However, random reuse is inefficient. Thus, the research question is how LCS can adopt a layered-learning framework, such that increasingly complex problems can be solved efficiently? An LCS (named XCSCF*) has been developed to include the required base axioms necessary for learning, refined methods for transfer learning and learning recast as a decomposition into a series of subordinate problems. These subordinate problems can be set as a curriculum by a teacher, but this does not mean that an agent can learn from it. Especially if it only extracts over-fitted knowledge of each problem rather than the underlying scalable patterns and functions. Results show that from a conventional tabula rasa, with only a vague notion of what subordinate problems might be relevant, XCSCF* captures the general logic behind the tested domains and therefore can solve any n-bit Multiplexer, n-bit Carry-one, n-bit Majority-on, and n-bit Even-parity problems. This work demonstrates a step towards continual learning as learned knowledge is effectively reused in subsequent problems.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-25"},"PeriodicalIF":6.8,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140877840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OneMax is not the Easiest Function for Fitness Improvements.","authors":"Marc Kaufmann, Maxime Larcher, Johannes Lengler, Xun Zou","doi":"10.1162/evco_a_00348","DOIUrl":"https://doi.org/10.1162/evco_a_00348","url":null,"abstract":"<p><p>We study the (1:s+1) success rule for controlling the population size of the (1,λ)- EA. It was shown by Hevia Fajardo and Sudholt that this parameter control mechanism can run into problems for large s if the fitness landscape is too easy. They conjectured that this problem is worst for the ONEMAX benchmark, since in some well-established sense ONEMAX is known to be the easiest fitness landscape. In this paper we disprove this conjecture. We show that there exist s and ɛ such that the self-adjusting (1,λ)-EA with the (1:s+1)-rule optimizes ONEMAX efficiently when started with ɛn zero-bits, but does not find the optimum in polynomial time on DYNAMIC BINVAL. Hence, we show that there are landscapes where the problem of the (1:s+1)-rule for controlling the population size of the (1,λ)-EA is more severe than for ONEMAX. The key insight is that, while ONEMAX is the easiest function for decreasing the distance to the optimum, it is not the easiest fitness landscape with respect to finding fitness-improving steps.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-30"},"PeriodicalIF":6.8,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140295208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Drift Analysis with Fitness Levels for Elitist Evolutionary Algorithms.","authors":"Jun He, Yuren Zhou","doi":"10.1162/evco_a_00349","DOIUrl":"https://doi.org/10.1162/evco_a_00349","url":null,"abstract":"<p><p>The fitness level method is a popular tool for analyzing the hitting time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and estimate lower and upper bounds on the hitting time using transition probabilities between fitness levels. However, the lower bound generated by this method is often loose. An open question regarding the fitness level method is what are the tightest lower and upper time bounds that can be constructed based on transition probabilities between fitness levels. To answer this question, we combine drift analysis with fitness levels and define the tightest bound problem as a constrained multi-objective optimization problem subject to fitness levels. The tightest metric bounds by fitness levels are constructed and proven for the first time. Then linear bounds are derived from metric bounds and a framework is established that can be used to develop different fitness level methods for different types of linear bounds. The framework is generic and promising, as it can be used to draw tight time bounds on both fitness landscapes with and without shortcuts. This is demonstrated in the example of the (1+1) EA maximizing the TwoMax1 function.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"1-25"},"PeriodicalIF":6.8,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140295207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial for the Special Issue on Reproducibility.","authors":"Manuel López-Ibáñez, Luís Paquete, Mike Preuss","doi":"10.1162/evco_e_00344","DOIUrl":"10.1162/evco_e_00344","url":null,"abstract":"","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":"32 1","pages":"1-2"},"PeriodicalIF":6.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Practical Methodology for Reproducible Experimentation: An Application to the Double-Row Facility Layout Problem.","authors":"Raúl Martín-Santamaría, Sergio Cavero, Alberto Herrán, Abraham Duarte, J Manuel Colmenar","doi":"10.1162/evco_a_00317","DOIUrl":"10.1162/evco_a_00317","url":null,"abstract":"<p><p>Reproducibility of experiments is a complex task in stochastic methods such as evolutionary algorithms or metaheuristics in general. Many works from the literature give general guidelines to favor reproducibility. However, none of them provide both a practical set of steps or software tools to help in this process. In this article, we propose a practical methodology to favor reproducibility in optimization problems tackled with stochastic methods. This methodology is divided into three main steps, where the researcher is assisted by software tools which implement state-of-the-art techniques related to this process. The methodology has been applied to study the double-row facility layout problem (DRFLP) where we propose a new algorithm able to obtain better results than the state-of-the-art methods. To this aim, we have also replicated the previous methods in order to complete the study with a new set of larger instances. All the produced artifacts related to the methodology and the study of the target problem are available in Zenodo.</p>","PeriodicalId":50470,"journal":{"name":"Evolutionary Computation","volume":" ","pages":"69-104"},"PeriodicalIF":6.8,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40695126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}