Title: Fast Genetic Algorithm for feature selection — A qualitative approximation approach
Authors: Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami, P. Mashhadi
DOI: https://doi.org/10.1145/3583133.3595823
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: We propose a two-stage surrogate-assisted evolutionary approach to address the computational issues arising from using a Genetic Algorithm (GA) for feature selection in a wrapper setting on large datasets. The approach constructs a lightweight qualitative meta-model by sub-sampling data instances and then uses this meta-model to carry out the feature selection task. We define "Approximation Usefulness" to capture the necessary conditions for the meta-model to lead the evolutionary computation to the correct maximum of the fitness function. Based on this procedure, we create CHCQX, a Qualitative approXimation variant of the GA-based algorithm CHC (Cross generational elitist selection, Heterogeneous recombination and Cataclysmic mutation). We show that CHCQX converges faster to feature subset solutions of significantly higher accuracy, particularly for large datasets with over 100K instances. We also demonstrate the applicability of our approach to Swarm Intelligence (SI) with results for PSOQX, a qualitative approximation adaptation of the Particle Swarm Optimization (PSO) method. A GitHub repository with the complete implementation is available. This paper for the Hot-off-the-Press track at GECCO 2023 summarizes the original work published at [3].
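The two-stage idea lends itself to a small sketch: the fitness of a candidate feature subset is evaluated against a sub-sample of instances rather than the full dataset, and the cheap meta-model is useful as long as it preserves the *ranking* of candidates. The toy dataset, the trivial sum-and-threshold scorer, and all names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1000 instances, 8 features; only features 0-2 matter.
X = rng.normal(size=(1000, 8))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

def exact_fitness(mask, X, y):
    """Wrapper fitness on the full dataset: accuracy of a trivial
    sum-and-threshold scorer restricted to the selected features."""
    if not mask.any():
        return 0.0
    score = X[:, mask].sum(axis=1)
    return float(((score > 0) == y).mean())

def qualitative_fitness(mask, X, y, n_sub=100):
    """Qualitative meta-model: the same wrapper evaluated on a small
    sub-sample.  It is useful as long as it ranks candidate subsets in
    the same order as the exact fitness ('approximation usefulness'),
    even if the absolute values differ."""
    idx = rng.choice(len(X), size=n_sub, replace=False)
    return exact_fitness(mask, X[idx], y[idx])

good = np.array([True, True, True] + [False] * 5)   # informative subset
bad = np.array([False] * 5 + [True] * 3)            # noise-only subset
# The cheap meta-model preserves the ranking good > bad.
assert qualitative_fitness(good, X, y) > qualitative_fitness(bad, X, y)
```

The GA then runs its generations against `qualitative_fitness` at a fraction of the cost of the exact wrapper evaluation.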
Title: An Efficient QAOA via a Polynomial QPU-Needless Approach
Authors: F. Chicano, Z. Dahi, Gabriel Luque
DOI: https://doi.org/10.1145/3583133.3596409
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum algorithm built from ansatzes representing the problem and the mixer Hamiltonians. Both are parameterizable unitary transformations executed on a quantum machine/simulator, whose parameters are iteratively tuned by a classical device to optimize the problem's expectation value. To do so, most of the literature uses a quantum machine/simulator in each QAOA iteration to measure the QAOA outcomes. However, this poses a severe bottleneck: quantum machines are heavily constrained (e.g. long queuing, limited qubits), while quantum simulation incurs exponentially increasing memory usage on large problems requiring more qubits. These limitations make today's QAOA implementations impractical, since it is hard to obtain good solutions within a reasonable time and resource budget. Considering these facts, this work presents a new approach with two main contributions: (I) removing the need to access quantum devices or large classical machines during the QAOA optimization phase, and (II) ensuring that, for some k-bounded pseudo-Boolean problems, the exact problem expectation value can be optimized in polynomial time on a classical computer.
Title: Human-Driven Genetic Programming for Program Synthesis: A Prototype
Authors: Thomas Helmuth, James Gunder Frazier, Yu-me Shi, A. Abdelrehim
DOI: https://doi.org/10.1145/3583133.3596373
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: End users can benefit from automatic program synthesis in a variety of applications, many of which require the user to specify the program they would like to generate. Recent advances in genetic programming allow it to generate general purpose programs similar to those humans write, but require specifications in the form of extensive, labeled training data, a barrier to using it for user-driven synthesis. Here we describe the prototype of a human-driven genetic programming system that can be used to synthesize programs from scratch. In order to address the issue of extensive training data, we draw inspiration from counterexample-driven genetic programming, allowing the user to initially provide only a few training cases and asking the user to verify the correctness of potential solutions on automatically generated potential counterexample cases. We present anecdotal experiments showing that our prototype can solve a variety of easy program synthesis problems entirely based on user input.
Title: Quality Diversity optimization using the IsoLineDD operator: forward and backward directions are equally important
Authors: Konstantinos Christou, C. Christodoulou, Vassilis Vassiliades
DOI: https://doi.org/10.1145/3583133.3590737
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: Quality Diversity (QD) optimization aims at returning a diverse collection of high quality solutions in a single run. Prior work indicated that the optimized collection concentrates in a subspace of the genotype space called the Elite Hypervolume. This suggested that if the Elite Hypervolume is convex, perturbing a solution (elite) towards the direction of another elite would create a solution inside the subspace. To accelerate QD optimization, the IsoLineDD operator was proposed which perturbs a solution along the line that connects it with another elite, both in the forward and the backward direction (i.e., towards and away from another elite), with equal probability. In this work, we hypothesize that the backward direction is mostly useful for exploration, at the beginning of QD optimization, rather than exploitation. To validate our hypothesis, we create a dynamic IsoLineDD operator that modifies its probability of creating solutions in the forward and backward directions, respectively. Our experiments and analysis in the robotic arm repertoire and Rastrigin problems invalidate this hypothesis by demonstrating that the forward and backward directions of vanilla IsoLineDD equally contribute to the optimization of the collection.
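For context, the IsoLineDD family of operators perturbs an elite with isotropic Gaussian noise plus a Gaussian-weighted step along the line to another elite; the forward/backward split corresponds to the sign of that line coefficient. A minimal sketch of the dynamic variant described above, with an assumed `p_forward` parameter controlling the direction probability (illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def iso_line_dd(x, y, sigma_iso=0.01, sigma_line=0.2, p_forward=0.5):
    """Iso+LineDD-style variation: isotropic Gaussian noise around
    elite x plus a step along the line connecting x to elite y.
    The sign of the line coefficient selects the direction: forward
    (towards y) with probability p_forward, otherwise backward (away
    from y).  p_forward=0.5 recovers the vanilla operator, whose line
    coefficient is a symmetric zero-mean Gaussian."""
    iso = sigma_iso * rng.normal(size=x.shape)
    coeff = abs(sigma_line * rng.normal())
    sign = 1.0 if rng.random() < p_forward else -1.0
    return x + iso + sign * coeff * (y - x)

x_elite, y_elite = np.zeros(4), np.ones(4)
forward_child = iso_line_dd(x_elite, y_elite, p_forward=1.0)
backward_child = iso_line_dd(x_elite, y_elite, p_forward=0.0)
```

A dynamic schedule would start with `p_forward` near 0.5 and raise it over time; the paper's finding is that such a schedule does not beat the vanilla 0.5 split.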
Title: Benchmarking CMA-ES with Basic Integer Handling on a Mixed-Integer Test Problem Suite
Authors: Tristan Marty, Y. Semet, A. Auger, Sébastien Héron, N. Hansen
DOI: https://doi.org/10.1145/3583133.3596411
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: We benchmark an implementation of CMA-ES (pycma version 3.3.0) on functions with both continuous and integer variables. The implementation incorporates a lower bound on the variance along the integer coordinates to keep the optimization from stalling. This benchmark will serve as a baseline for further work on pycma. Results show substantial improvement since the last benchmarked version of pycma, and the implementation is competitive with other mixed-integer algorithms.
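The integer-handling idea, a lower bound on the variance along integer coordinates, can be illustrated with a toy Gaussian sampler; this is a sketch of the general mechanism, not the pycma implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mixed(mean, stddev, int_mask, sigma_floor=0.2):
    """Sample one candidate for a mixed-integer problem.  Along the
    integer coordinates the standard deviation is lower-bounded, so
    that after rounding a mutation can still change the integer value;
    without the floor, step-size adaptation can shrink the stddev far
    below 0.5, every sample rounds back to the same integer, and the
    search stalls on those coordinates."""
    stddev = np.where(int_mask, np.maximum(stddev, sigma_floor), stddev)
    x = mean + stddev * rng.normal(size=mean.shape)
    return np.where(int_mask, np.round(x), x)

mean = np.array([0.3, 2.6])           # coordinate 1 is integer-valued
int_mask = np.array([False, True])
candidate = sample_mixed(mean, np.full(2, 1e-8), int_mask)
```

Even with a vanishing step size, the floored coordinate keeps producing distinct integer values instead of collapsing onto one.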
Title: Performance Analysis of Self-Supervised Strategies for Standard Genetic Programming
Authors: Nuno M. Rodrigues, J. Almeida, Sara Silva
DOI: https://doi.org/10.1145/3583133.3590748
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: Self-supervised learning (SSL) methods have been widely used to train deep learning models for computer vision and natural language processing domains. They leverage large amounts of unlabeled data to help pretrain models by learning patterns implicit in the data. Recently, new SSL techniques for tabular data have been developed, using new pretext tasks that typically aim to reconstruct a corrupted input sample and yielding models which are, ideally, robust feature transforms. In this paper, we pose the research question of whether genetic programming is capable of leveraging data processed using SSL methods to improve its performance. We test this hypothesis by assuming different amounts of labeled data on seven different datasets (five OpenML benchmarking datasets and two real-world datasets). The obtained results show that in almost all problems, standard genetic programming is not able to capitalize on the learned representations, producing results equal to or worse than using the labeled partitions.
Title: Parametrizing GP Trees for Better Symbolic Regression Performance through Gradient Descent
Authors: Gloria Pietropolli, Federico Julian Camerota Verdù, L. Manzoni, M. Castelli
DOI: https://doi.org/10.1145/3583133.3590574
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: Symbolic regression is a common problem in genetic programming (GP), but the syntactic search carried out by the standard GP algorithm often struggles to tune the learned expressions. On the other hand, gradient-based optimizers can efficiently tune parametric functions by exploring the search space locally. While there is a large amount of research on the combination of evolutionary algorithms and local search (LS) strategies, few of these studies deal with GP. To get the best from both worlds, we propose embedding learnable parameters in GP programs and combining the standard GP evolutionary approach with a gradient-based refinement of the individuals employing the Adam optimizer. We devise two different algorithms that differ in how these parameters are shared in the expression operators and report experimental results performed on a set of standard real-life application datasets. Our findings show that the proposed gradient-based LS approach can be effectively combined with GP to outperform the original algorithm.
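The core idea, constants embedded in an evolved expression that are then tuned by gradient-based local search, can be sketched on a deliberately simple expression. The paper uses the Adam optimizer; plain gradient descent on the mean squared error is used here for brevity, and the expression and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Suppose GP evolved the expression  a*sin(2*x) + b*x  with untuned
# constants a and b embedded in its nodes; gradient-based local search
# then refines those constants against the training data.
x = rng.uniform(-1.0, 1.0, size=512)
y = 2.0 * np.sin(2.0 * x) - 0.5 * x    # ground-truth target

a, b = 0.0, 0.0                        # initial embedded parameters
lr = 0.5
for _ in range(2000):
    err = a * np.sin(2.0 * x) + b * x - y           # residuals
    a -= lr * 2.0 * np.mean(err * np.sin(2.0 * x))  # dMSE/da
    b -= lr * 2.0 * np.mean(err * x)                # dMSE/db

mse = float(np.mean((a * np.sin(2.0 * x) + b * x - y) ** 2))
```

Evolution handles the discrete search over tree shapes; the inner gradient loop handles the continuous search over the constants each shape contains.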
Title: Exploratory Landscape Analysis Based Parameter Control
Authors: M. Pikalov, Aleksei Pismerov
DOI: https://doi.org/10.1145/3583133.3596364
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: Parameter tuning in evolutionary algorithms is a very important topic, as the correct choice of parameters greatly affects their performance. Fitness landscape analysis can help identify similar problems and gather insights into problem structure for a fitness-aware choice of optimization algorithm parameters. In this paper, we present an automatic dynamic parameter control method that uses exploratory landscape analysis and machine learning. Using a dataset of optimal parameter values collected on different instances of the W-model benchmark problem, we trained a machine learning model capable of suggesting parameter values for the (1 + (λ, λ)) genetic algorithm. The results of our experiments show that the machine learning model is able to capture important landscape features and recommend algorithm parameters based on this information. Comparison with other tuning methods suggests this approach is more effective than static tuning or heuristics-based dynamic parameter control.
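The pipeline, cheap landscape features feeding a learned mapping to parameter values, can be sketched as follows. The particular features, the toy training table, and the nearest-neighbour stand-in for the trained model are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(4)

def ela_features(f, n, samples=64):
    """A tiny stand-in for an exploratory-landscape feature vector of a
    pseudo-Boolean function f over {0,1}^n, estimated from random bit
    strings: mean fitness, fitness spread, and fitness-distance
    correlation (negative values suggest the landscape guides search
    towards the optimum)."""
    X = rng.integers(0, 2, size=(samples, n))
    y = np.array([f(x) for x in X])
    dist_to_best = np.abs(X - X[y.argmax()]).sum(axis=1)
    fdc = float(np.corrcoef(y, dist_to_best)[0, 1])
    return np.array([y.mean(), y.std(), fdc])

def suggest_lambda(features, table):
    """Nearest-neighbour stand-in for the trained model: return the
    lambda value of the most similar known problem instance."""
    dists = [np.linalg.norm(features - feats) for feats, _ in table]
    return table[int(np.argmin(dists))][1]

def onemax(bits):
    return int(bits.sum())

# Hypothetical training table of (feature vector, good lambda) pairs.
table = [(ela_features(onemax, 20), 4),
         (np.array([0.0, 0.0, 0.0]), 16)]
lam = suggest_lambda(ela_features(onemax, 20), table)
```

Dynamic control follows by recomputing the features from the current population every few generations and querying the model again.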
Title: A New Metaheuristic-Algorithm Similarity Measure Using Signal Flow Diagrams
Authors: T. Achary, A. Pillay, E. Jembere
DOI: https://doi.org/10.1145/3583133.3590692
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: The component-based view for metaheuristic research promotes the identification of structural components of metaheuristics and metaheuristic-algorithms for analysis. In this study, we propose a method for measuring similarity between metaheuristic-algorithms. The method is based on a modified version of a signal flow representation of metaheuristic-algorithms that is aligned with the component-based view. The method takes any two metaheuristic-algorithms and decomposes them into their heuristic components whilst taking note of the order of execution of the heuristics. Features of the heuristics are then extracted, and finally a feature-based similarity calculation, that also considers the position of the heuristics, is performed to obtain an overall similarity score between the two metaheuristic-algorithms. The method incorporates more structural information in the similarity calculation than previous component-wise similarity measures and can be extended to cover a comprehensive set of metaheuristic components.
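A position-aware, feature-based similarity of the general kind described can be sketched as follows; the Jaccard feature comparison, the linear positional discount, and the greedy matching are illustrative choices, since the paper's concrete measure is not reproduced here:

```python
def component_similarity(alg_a, alg_b):
    """Hypothetical position-aware similarity between two
    metaheuristic-algorithms, each given as an ordered list of
    heuristic components described by feature sets.  Components are
    compared by Jaccard similarity of their features, discounted by
    how far apart they sit in the execution order; each component of
    the first algorithm greedily keeps its best match.  (Averaging
    over alg_a only makes the score asymmetric; a symmetric measure
    would average both directions.)"""
    def jaccard(f, g):
        return len(f & g) / len(f | g) if f | g else 1.0

    n, m = len(alg_a), len(alg_b)
    if n == 0:
        return 1.0 if m == 0 else 0.0
    scores = []
    for i, f in enumerate(alg_a):
        best = 0.0
        for j, g in enumerate(alg_b):
            positional = 1.0 - abs(i - j) / max(n, m)  # order-aware discount
            best = max(best, jaccard(f, g) * positional)
        scores.append(best)
    return sum(scores) / n

ga = [{"init", "random"}, {"selection", "tournament"},
      {"crossover", "one-point"}, {"mutation", "bitflip"}]
es = [{"init", "random"}, {"mutation", "gaussian"}, {"selection", "plus"}]
sim = component_similarity(ga, es)
```

Identical algorithms score 1.0, and sharing components at similar execution positions raises the score more than sharing them at distant ones.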
Title: Maximizing Efficiency: A Comparative Study of SOMA Algorithm Variants and Constraint Handling Methods for Time Delay System Optimization
Authors: R. Šenkeřík, T. Kadavy, Peter Janků, Michal Pluhacek, Hubert Guzowski, L. Pekař, R. Matušů, Adam Viktorin, M. Smółka, A. Byrski, Zuzana Kominkova Oplatkova
DOI: https://doi.org/10.1145/3583133.3596417
Published: 2023-07-15, Proceedings of the Companion Conference on Genetic and Evolutionary Computation
Abstract: This paper presents an experimental study that compares four adaptive variants of the self-organizing migrating algorithm (SOMA). Each variant uses three different constraint handling methods for the optimization of a time delay system model. The paper emphasizes the importance of metaheuristic algorithms in control engineering for time-delayed systems to develop more effective and efficient control strategies and precise model identifications. The study includes a detailed description of the selected variants of the SOMA and the adaptive mechanisms used. A complex workflow of experiments is described, and the results and discussion are presented. The experimental results highlight the effectiveness of the SOMA variants with specific constraint handling methods for time delay system optimization. Overall, this study contributes to the understanding of the challenges and advantages of using metaheuristic algorithms in control engineering for time delay systems. The results provide valuable insights into the performance of the SOMA variants and can help guide the selection of appropriate constraint handling methods and the adaptive mechanisms of metaheuristics.