{"title":"Evolutionary Seeding of Diverse Structural Design Solutions via Topology Optimization","authors":"Yue Xie, Josh Pinskier, Xing Wang, David Howard","doi":"10.1145/3670693","DOIUrl":"https://doi.org/10.1145/3670693","url":null,"abstract":"Topology optimization is a powerful design tool in structural engineering and other engineering problems. The design domain is discretized into elements, and a finite element model is solved iteratively to find the material distribution that maximizes the structure's performance. Although gradient-based solvers have been used to solve topology optimization problems, they may converge to suboptimal solutions or struggle to obtain feasible ones, particularly in non-convex optimization problems. The presence of non-convexities can hinder convergence, making the global optimum difficult to reach. With this in mind, this paper discusses the application of the quality diversity approach to topology optimization problems. Quality diversity (QD) algorithms have shown promise in the field of optimization and have many applications in engineering design, robotics, and games. MAP-Elites is a popular QD algorithm used in robotics. In soft robotics, the MAP-Elites algorithm has been used to optimize the shape and control of soft robots, leading to the discovery of new and efficient motion strategies. This paper introduces a MAP-Elites-based approach that provides diverse designs for structural optimization problems. Three fundamental topology optimization problems are used for experimental testing, and the results demonstrate the ability of the proposed algorithm to generate diverse, high-performing designs for those problems. Furthermore, the proposed algorithm can serve as a valuable engineering design tool capable of creating novel and efficient designs.","PeriodicalId":497392,"journal":{"name":"ACM transactions on evolutionary learning","volume":"52 42","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141384171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Influence of Noise on Multi-Parent Crossover for an Island Model Genetic Algorithm","authors":"Brahim Aboutaib, Andrew M. Sutton","doi":"10.1145/3630638","DOIUrl":"https://doi.org/10.1145/3630638","url":null,"abstract":"Many optimization problems tackled by evolutionary algorithms are not only computationally expensive but also complicated by one or more sources of noise. One technique for dealing with high computational overhead is parallelization. However, although the existing literature gives good insights into the expected behavior of parallelized evolutionary algorithms, we still lack an understanding of their performance in the presence of noise. This paper considers how parallelization might be leveraged together with multi-parent crossover to handle noisy problems. We present a rigorous running time analysis of an island model with a weakly connected topology tasked with hill climbing in the presence of general additive noise (i.e., noisy OneMax). Our proofs yield insights into the relationship between the noise intensity and the number of required parents. We translate this into positive and negative results for two kinds of multi-parent crossover operators. We then empirically analyze and extend this framework to investigate the trade-offs between noise impact, optimization time, and the limits of computational power in dealing with noise.","PeriodicalId":497392,"journal":{"name":"ACM transactions on evolutionary learning","volume":" 12","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135241918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Model-based Gradient Search for Permutation Problems","authors":"Josu Ceberio, Valentino Santucci","doi":"10.1145/3628605","DOIUrl":"https://doi.org/10.1145/3628605","url":null,"abstract":"Global random search algorithms are characterized by the use of probability distributions to optimize problems. Among them, generative methods iteratively update the distributions using the sampled observations. This is the case, for instance, of the well-known Estimation of Distribution Algorithms. Although successful, this family of algorithms relies at each iteration on numerical methods for estimating the parameters of a model or drawing observations from it. This is often a very time-consuming task, especially in permutation-based combinatorial optimization problems. In this work, we propose using a generative method, under the model-based gradient search framework, to optimize permutation-coded problems and address the aforementioned computational overheads. To that end, the Plackett-Luce model is used to define the probability distribution over the search space of permutations. In addition, a parameter-free variant of the algorithm is investigated. Experiments conducted to validate the work reveal that the gradient search scheme produces better results than other analogous competitors, reducing the computational cost and showing better scalability.","PeriodicalId":497392,"journal":{"name":"ACM transactions on evolutionary learning","volume":"53 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135618558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the Explainable Aspects and Performance of a Learnable Evolutionary Multiobjective Optimization Method","authors":"Giovanni Misitano","doi":"10.1145/3626104","DOIUrl":"https://doi.org/10.1145/3626104","url":null,"abstract":"Multiobjective optimization problems have multiple conflicting objective functions to be optimized simultaneously. The solutions to these problems are known as Pareto optimal solutions, which are mathematically incomparable. Thus, a decision maker must be employed to provide preferences to find the most preferred solution. However, decision makers often lack support in providing preferences and in exploring the available solutions. We explore the combination of learnable evolutionary models with interactive indicator-based evolutionary multiobjective optimization to create a learnable evolutionary multiobjective optimization method. Furthermore, we leverage interpretable machine learning to provide decision makers with potential insights about the problem being solved in the form of rule-based explanations. In fact, we show that a learnable evolutionary multiobjective optimization method can offer advantages in the search for solutions to a multiobjective optimization problem. We also provide an open source software framework for other researchers to implement and explore our ideas in their own works. Our work is a step towards establishing a new paradigm in the field of multiobjective optimization: explainable and learnable multiobjective optimization. We take the first steps towards this new research direction and provide other researchers and practitioners with the necessary tools and ideas to further contribute to this field.","PeriodicalId":497392,"journal":{"name":"ACM transactions on evolutionary learning","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135385435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial to the “Evolutionary Reinforcement Learning” Special Issue","authors":"Adam Gaier, Giuseppe Paolo, Antoine Cully","doi":"10.1145/3624559","DOIUrl":"https://doi.org/10.1145/3624559","url":null,"abstract":"No abstract available.","PeriodicalId":497392,"journal":{"name":"ACM transactions on evolutionary learning","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134957845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}