{"title":"Transformation-Interaction-Rational Representation for Symbolic Regression: A Detailed Analysis of SRBench Results","authors":"F. O. de França","doi":"10.1145/3597312","DOIUrl":"https://doi.org/10.1145/3597312","url":null,"abstract":"Symbolic Regression searches for a parametric model with the optimal value of the parameters that best fits a set of samples to a measured target. The desired solution has a balance between accuracy and interpretability. Commonly, there is no constraint in the way the functions are composed in the expression or where the numerical parameters are placed, which can potentially lead to expressions that require a nonlinear optimization to find the optimal parameters. The representation called Interaction-Transformation alleviates this problem by describing expressions as a linear regression of the composition of functions applied to the interaction of the variables. One advantage is that any model that follows this representation is linear in its parameters, allowing an efficient computation. More recently, this representation was extended by applying a univariate function to the rational function of two Interaction-Transformation expressions, called Transformation-Interaction-Rational (TIR). The use of this representation was shown to be competitive with the current literature of Symbolic Regression. In this article, we make a detailed analysis of these results using the SRBench benchmark. For this purpose, we split the datasets into different categories to understand the algorithm behavior in different settings. We also test the use of nonlinear optimization to adjust the numerical parameters instead of Ordinary Least Squares. We find through the experiments that TIR has some difficulties handling high-dimensional and noisy datasets, especially when most of the variables are composed of random noise. These results point to new directions for improving the evolutionary search of TIR expressions.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114588327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Covariance Matrix Adaptation Evolutionary Strategy with Worst-Case Ranking Approximation for Min–Max Optimization and Its Application to Berthing Control Tasks
Authors: Atsuhiro Miyagi, Yoshiki Miyauchi, A. Maki, Kazuto Fukuchi, J. Sakuma, Youhei Akimoto
DOI: https://doi.org/10.1145/3603716
Journal: ACM Transactions on Evolutionary Learning
Published: 2023-03-28
Abstract: In this study, we consider a continuous min–max optimization problem min_{x ∈ 𝕏} max_{y ∈ 𝕐} f(x, y) whose objective function is a black box. We propose a novel approach to minimize the worst-case objective function F(x) = max_{y ∈ 𝕐} f(x, y) directly using a covariance matrix adaptation evolution strategy in which the rankings of solution candidates are approximated by our proposed worst-case ranking approximation mechanism. We develop two variants of worst-case ranking approximation, combined with a covariance matrix adaptation evolution strategy and approximate gradient ascent as numerical solvers for the inner maximization problem. Numerical experiments show that our proposed approach outperforms several existing approaches when the objective function is a smooth strongly convex–concave function and the interaction between x and y is strong. We also investigate the advantages of the proposed approach on problems whose objective functions are not limited to smooth strongly convex–concave functions. Finally, the effectiveness of the proposed approach is demonstrated on a robust berthing control problem with uncertainty.

Title: Factors Impacting Diversity and Effectiveness of Evolved Modular Robots
Authors: F. Pigozzi, Eric Medvet, Alberto Bartoli, Marco Rochelli
DOI: https://doi.org/10.1145/3587101
Journal: ACM Transactions on Evolutionary Learning
Published: 2023-03-09
Abstract: In many natural environments, different forms of living organisms successfully accomplish the same task while being diverse in shape and behavior. This biodiversity is what made life capable of adapting to disruptive changes. Being able to reproduce biodiversity in artificial agents, while still optimizing them for a particular task, might increase their applicability to scenarios where a human response to unexpected changes is not possible. In this work, we focus on Voxel-based Soft Robots (VSRs), a form of robots that grants great freedom in the design of both morphology and controller and is hence promising in terms of biodiversity. We use evolutionary computation to optimize, at the same time, the morphology and controller of VSRs for the task of locomotion. We investigate experimentally whether three key factors, namely representation, Evolutionary Algorithm (EA), and environment, impact the emergence of biodiversity, and whether this occurs at the expense of effectiveness. We devise an automatic machine learning pipeline for systematically characterizing the morphology and behavior of the robots resulting from the optimization process. We classify the robots into species and then measure biodiversity in populations of robots evolved under a multitude of conditions resulting from the combination of different morphology representations, controller representations, EAs, and environments. The experimental results suggest that, in general, the EA and the environment matter more than the representation. We also propose a novel EA based on a speciation mechanism that operates on morphology and behavior descriptors, and we show that it allows the morphology and controller of effective and diverse VSRs to be evolved jointly.

{"title":"The Generation of Visually Credible Adversarial Examples with Genetic Algorithms","authors":"James R. Bradley, A. P. Blossom","doi":"10.1145/3582276","DOIUrl":"https://doi.org/10.1145/3582276","url":null,"abstract":"An adversarial example is an input that a neural network misclassifies although the input differs only slightly from an input that the network classifies correctly. Adversarial examples are used to augment neural network training data, measure the vulnerability of neural networks, and provide intuitive interpretations of neural network output that humans can understand. Although adversarial examples are defined in the literature as similar to authentic input from the perspective of humans, the literature measures similarity with mathematical norms that are not scientifically correlated with human perception. Our main contributions are to construct a genetic algorithm (GA) that generates adversarial examples more similar to authentic input than do existing methods and to demonstrate with a survey that humans perceive those adversarial examples to have greater visual similarity than existing methods. The GA incorporates a neural network, and we test many parameter sets to determine which fitness function, selection operator, mutation operator, and neural network generate adversarial examples most visually similar to authentic input. We establish which mathematical norms are most correlated with human perception, which permits future research to incorporate the human perspective without testing many norms or conducting intensive surveys with human subjects. We also document a tradeoff between speed and quality in adversarial examples generated by GAs and existing methods. Although existing adversarial methods are faster, a GA provides higher-quality adversarial examples in terms of visual similarity and feasibility of adversarial examples. We apply the GA to the Modified National Institute of Standards and Technology (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) datasets.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122435325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Explainable Regression Via Prototypes","authors":"Renato Miranda Filho, A. Lacerda, G. Pappa","doi":"10.1145/3576903","DOIUrl":"https://doi.org/10.1145/3576903","url":null,"abstract":"Model interpretability/explainability is increasingly a concern when applying machine learning to real-world problems. In this article, we are interested in explaining regression models by exploiting prototypes, which are exemplar cases in the problem domain. Previous works focused on finding prototypes that are representative of all training data but ignore the model predictions, i.e., they explain the data distribution but not necessarily the predictions. We propose a two-level model-agnostic method that considers prototypes to provide global and local explanations for regression problems and that account for both the input features and the model output. M-PEER (Multiobjective Prototype-basEd Explanation for Regression) is based on a multi-objective evolutionary method that optimizes both the error of the explainable model and two other “semantics”-based measures of interpretability adapted from the context of classification, namely, model fidelity and stability. We compare the proposed method with the state-of-the-art method based on prototypes for explanation—ProtoDash—and with other methods widely used in correlated areas of machine learning, such as instance selection and clustering. We conduct experiments on 25 datasets, and results demonstrate significant gains of M-PEER over other strategies, with an average of 12% improvement in the proposed metrics (i.e., model fidelity and stability) and 17% in root mean squared error (RMSE) when compared to ProtoDash.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129034001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Curiosity Creates Diversity in Policy Search
Authors: Paul-Antoine Le Tolguenec, E. Rachelson, Y. Besse, Dennis G. Wilson
DOI: https://doi.org/10.1145/3605782
Journal: ACM Transactions on Evolutionary Learning
Published: 2022-12-07
Abstract: When searching for policies, reward-sparse environments often lack sufficient information about which behaviors to improve upon or avoid. In such environments, the policy search process is bound to search blindly for reward-yielding transitions, and no early reward can bias this search in one direction or another. A way to overcome this is to use intrinsic motivation to explore new transitions until a reward is found. In this work, we use a recently proposed definition of intrinsic motivation, Curiosity, in an evolutionary policy search method. We propose Curiosity-ES, an evolution strategy adapted to use Curiosity as its fitness metric. We compare Curiosity-ES with other evolutionary algorithms intended for exploration, as well as with Curiosity-based reinforcement learning, and find that Curiosity-ES can generate higher diversity without the need for an explicit diversity criterion and leads to more policies that find reward.

{"title":"Empirical analysis of PGA-MAP-Elites for Neuroevolution in Uncertain Domains","authors":"Manon Flageat, Félix Chalumeau, Antoine Cully","doi":"10.1145/3577203","DOIUrl":"https://doi.org/10.1145/3577203","url":null,"abstract":"Quality-Diversity algorithms, among which are the Multi-dimensional Archive of Phenotypic Elites (MAP-Elites), have emerged as powerful alternatives to performance-only optimisation approaches as they enable generating collections of diverse and high-performing solutions to an optimisation problem. However, they are often limited to low-dimensional search spaces and deterministic environments. The recently introduced Policy Gradient Assisted MAP-Elites (PGA-MAP-Elites) algorithm overcomes this limitation by pairing the traditional Genetic operator of MAP-Elites with a gradient-based operator inspired by deep reinforcement learning. This new operator guides mutations toward high-performing solutions using policy gradients (PG). In this work, we propose an in-depth study of PGA-MAP-Elites. We demonstrate the benefits of PG on the performance of the algorithm and the reproducibility of the generated solutions when considering uncertain domains. We firstly prove that PGA-MAP-Elites is highly performant in both deterministic and uncertain high-dimensional environments, decorrelating the two challenges it tackles. Secondly, we show that in addition to outperforming all the considered baselines, the collections of solutions generated by PGA-MAP-Elites are highly reproducible in uncertain environments, approaching the reproducibility of solutions found by Quality-Diversity approaches built specifically for uncertain applications. Finally, we propose an ablation and in-depth analysis of the dynamic of the PG-based variation. We demonstrate that the PG variation operator is determinant to guarantee the performance of PGA-MAP-Elites but is only essential during the early stage of the process, where it finds high-performing regions of the search space.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121276755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Online Damage Recovery for Physical Robots with Hierarchical Quality-Diversity
Authors: Maxime Allard, Simón C. Smith, Konstantinos Chatzilygeroudis, Bryan Lim, Antoine Cully
DOI: https://doi.org/10.1145/3596912
Journal: ACM Transactions on Evolutionary Learning
Published: 2022-10-18
Abstract: In real-world environments, robots need to be resilient to damage and robust to unforeseen scenarios. Quality-Diversity (QD) algorithms have been successfully used to make robots adapt to damage within seconds by leveraging a diverse set of learned skills. A high diversity of skills increases the chances of a robot succeeding at overcoming new situations, since there are more potential alternatives to solve a new task. However, finding and storing a large behavioural diversity of multiple skills often leads to an increase in computational complexity. Furthermore, planning in a large skill space is an additional challenge that arises with an increased number of skills. Hierarchical structures can help reduce this search and storage complexity by breaking skills down into primitive skills. In this article, we extend the analysis of the Hierarchical Trial and Error algorithm, which uses a hierarchical behavioural repertoire to learn diverse skills and leverages them to make the robot adapt quickly in the physical world. We show that the hierarchical decomposition of skills enables the robot to learn more complex behaviours while keeping the learning of the repertoire tractable. Experiments with a hexapod robot, both in simulation and in the physical world, show that our method solves a maze navigation task with up to 20% and 43% fewer actions, respectively, than the best baselines, while having 78% fewer complete failures.

{"title":"Theoretical and Empirical Analysis of Parameter Control Mechanisms in the (1 + (λ, λ)) Genetic Algorithm","authors":"Mario Alejandro Hevia Fajardo, Dirk Sudholt","doi":"10.1145/3564755","DOIUrl":"https://doi.org/10.1145/3564755","url":null,"abstract":"The self-adjusting (1 + (λ, λ)) GA is the best known genetic algorithm for problems with a good fitness-distance correlation as in OneMax. It uses a parameter control mechanism for the parameter λ that governs the mutation strength and the number of offspring. However, on multimodal problems, the parameter control mechanism tends to increase λ uncontrollably. We study this problem for the standard Jumpk benchmark problem class using runtime analysis. The self-adjusting (1 + (λ, λ)) GA behaves like a (1 + n) EA whenever the maximum value for λ is reached. This is ineffective for problems where large jumps are required. Capping λ at smaller values is beneficial for such problems. Finally, resetting λ to 1 allows the parameter to cycle through the parameter space. We show that resets are effective for all Jumpk problems: the self-adjusting (1 + (λ, λ)) GA performs as well as the (1 + 1) EA with the optimal mutation rate and evolutionary algorithms with heavy-tailed mutation, apart from a small polynomial overhead. Along the way, we present new general methods for translating existing runtime bounds from the (1 + 1) EA to the self-adjusting (1 + (λ, λ)) GA. We also show that the algorithm presents a bimodal parameter landscape with respect to λ on Jumpk. For appropriate n and k, the landscape features a local optimum in a wide basin of attraction and a global optimum in a narrow basin of attraction. To our knowledge this is the first proof of a bimodal parameter landscape for the runtime of an evolutionary algorithm on a multimodal problem.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"86 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132478751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-donor Neural Transfer Learning for Genetic Programming","authors":"A. Wild, Barry Porter","doi":"10.1145/3563043","DOIUrl":"https://doi.org/10.1145/3563043","url":null,"abstract":"Genetic programming (GP), for the synthesis of brand new programs, continues to demonstrate increasingly capable results towards increasingly complex problems. A key challenge in GP is how to learn from the past so that the successful synthesis of simple programs can feed into more challenging unsolved problems. Transfer Learning (TL) in the literature has yet to demonstrate an automated mechanism to identify existing donor programs with high-utility genetic material for new problems, instead relying on human guidance. In this article we present a transfer learning mechanism for GP which fills this gap: we use a Turing-complete language for synthesis, and demonstrate how a neural network (NN) can be used to guide automated code fragment extraction from previously solved problems for injection into future problems. Using a framework which synthesises code from just 10 input-output examples, we first study NN ability to recognise the presence of code fragments in a larger program, then present an end-to-end system which takes only input-output examples and generates code fragments as it solves easier problems, then deploys selected high-utility fragments to solve harder ones. The use of NN-guided genetic material selection shows significant performance increases, on average doubling the percentage of programs that can be successfully synthesised when tested on two different problem corpora, compared with a non-transfer-learning GP baseline.","PeriodicalId":220659,"journal":{"name":"ACM Transactions on Evolutionary Learning","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121791455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}