ACM Transactions on Evolutionary Learning: Latest Articles

AutoML Loss Landscapes
Y. Pushak, H. Hoos
ACM Transactions on Evolutionary Learning. Pub Date: 2022-09-02. DOI: 10.1145/3558774
Abstract: As interest in machine learning and its applications becomes more widespread, how to choose the best models and hyper-parameter settings becomes more important. This problem is known to be challenging for human experts, and consequently, a growing number of methods have been proposed for solving it, giving rise to the area of automated machine learning (AutoML). Many of the most popular AutoML methods are based on Bayesian optimization, which makes only weak assumptions about how modifying hyper-parameters affects the loss of a model. This is a safe assumption that yields robust methods, as the AutoML loss landscapes that relate hyper-parameter settings to loss are poorly understood. We build on recent work on the study of one-dimensional slices of algorithm configuration landscapes by introducing new methods that test n-dimensional landscapes for statistical deviations from uni-modality and convexity, and we use them to show that a diverse set of AutoML loss landscapes are highly structured. We introduce a method for assessing the significance of hyper-parameter partial derivatives, which reveals that most (but not all) AutoML loss landscapes have only a small number of hyper-parameters that interact strongly. To further assess hyper-parameter interactions, we introduce a simplistic optimization procedure that assumes each hyper-parameter can be optimized independently, a single time in sequence, and we show that it obtains configurations that are statistically tied with optimal in all of the n-dimensional AutoML loss landscapes that we studied. Our results suggest many possible new directions for substantially improving the state of the art in AutoML.
Citations: 9
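The sequential, one-pass optimization procedure described in the abstract can be illustrated with a short sketch: each hyper-parameter is tuned once, in order, while all others stay fixed. This is a minimal illustration only; the objective `validation_loss` and the candidate grids below are hypothetical stand-ins, not the authors' experimental setup.

```python
# Minimal sketch of a "one pass, one hyper-parameter at a time" optimizer,
# in the spirit of the procedure described in the abstract.
# `validation_loss` and the candidate grids are hypothetical stand-ins.

def validation_loss(config):
    # Stand-in objective; a real AutoML run would train and validate a model here.
    return (config["learning_rate"] - 0.01) ** 2 + 0.1 * abs(config["num_layers"] - 3)

search_space = {
    "learning_rate": [0.001, 0.003, 0.01, 0.03, 0.1],
    "num_layers": [1, 2, 3, 4, 5],
}

def sequential_optimize(search_space, loss_fn):
    # Start from the first candidate value of every hyper-parameter.
    config = {name: values[0] for name, values in search_space.items()}
    for name, values in search_space.items():
        # Optimize this hyper-parameter a single time, keeping all others fixed.
        config[name] = min(values, key=lambda v: loss_fn({**config, name: v}))
    return config

print(sequential_optimize(search_space, validation_loss))
```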
On the Design of a Matrix Adaptation Evolution Strategy for Optimization on General Quadratic Manifolds
Patrick Spettel, H. Beyer
ACM Transactions on Evolutionary Learning. Pub Date: 2022-07-27. DOI: 10.1145/3551394
Abstract: An evolution strategy design is presented that allows for an evolution on general quadratic manifolds. That is, it covers elliptic, parabolic, and hyperbolic equality constraints. The peculiarity of the presented algorithm design is that it is an interior point method: it evaluates the objective function only for feasible search parameter vectors and evolves on the nonlinear constraint manifold. Such a characteristic is particularly important in situations where it is not possible to evaluate infeasible parameter vectors, e.g., in simulation-based optimization. This is achieved by a closed-form transformation of an individual's parameter vector, in contrast to iterative repair mechanisms. This constraint-handling approach is incorporated into a matrix adaptation evolution strategy, making such algorithms capable of handling problems containing the constraints considered. Results of different experiments are presented. A test problem consisting of a spherical objective function and a single hyperbolic/parabolic equality constraint is used; it is designed to be scalable in the dimension. As a further benchmark, the Thomson problem is used. Both problems are used to compare the performance of the developed algorithm with other optimization methods supporting constraints. The experiments show the effectiveness of the proposed algorithm on the considered problems. Additionally, an idea for handling multiple constraints is discussed, and single-run dynamics are presented to give a better understanding of the dynamical behavior of the proposed algorithm.
Citations: 1
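As an illustration of a closed-form repair onto a quadratic manifold, the sketch below rescales a parameter vector so that it exactly satisfies an elliptic equality constraint x^T A x = r^2 with A positive definite. This covers only the simplest elliptic case and is an assumption-laden stand-in, not the paper's general transformation for parabolic or hyperbolic manifolds.

```python
import numpy as np

def project_to_ellipsoid(x, A, r):
    """Rescale x in closed form so that it satisfies x^T A x = r^2.

    Illustrative elliptic case only: A must be positive definite and x nonzero;
    the paper's transformation also covers parabolic and hyperbolic manifolds.
    """
    q = x @ A @ x                    # current quadratic form value, > 0 for PD A and x != 0
    return x * (r / np.sqrt(q))      # scaling factor chosen so the constraint holds exactly

# Example: repair a random offspring onto the unit sphere (A = identity, r = 1).
rng = np.random.default_rng(0)
A = np.eye(3)
offspring = rng.normal(size=3)
feasible = project_to_ellipsoid(offspring, A, r=1.0)
print(feasible, feasible @ A @ feasible)  # second value is 1.0 up to rounding
```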
Combining Evolution and Deep Reinforcement Learning for Policy Search: A Survey
Olivier Sigaud
ACM Transactions on Evolutionary Learning. Pub Date: 2022-03-26. DOI: 10.1145/3569096
Abstract: Deep neuroevolution and deep reinforcement learning have received a lot of attention over the past few years. Some works have compared them, highlighting their pros and cons, but an emerging trend combines them so as to benefit from the best of both worlds. In this article, we provide a survey of this emerging trend by organizing the literature into related groups of works and casting all the existing combinations in each group into a generic framework. We systematically cover all easily available papers irrespective of their publication status, focusing on the combination mechanisms rather than on the experimental results. In total, we cover 45 algorithms published since 2017. We hope this effort will foster the growth of the domain by facilitating the understanding of the relationships between the methods, leading to deeper analyses, outlining missing useful comparisons, and suggesting new combinations of mechanisms.
Citations: 15
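One combination pattern that recurs in this literature is to run a mutation-based population alongside a gradient-based learner and periodically inject the learner's policy back into the population. The toy sketch below replaces a real RL environment and policy network with an analytic return on a plain parameter vector; it shows only the control flow of such a combination and does not correspond to any specific surveyed algorithm.

```python
import numpy as np

# Toy "policy" = a parameter vector; toy "return" = negative distance to a target.
TARGET = np.array([1.0, -2.0, 0.5])

def episode_return(theta):
    return -np.sum((theta - TARGET) ** 2)

def grad_return(theta):
    return -2.0 * (theta - TARGET)  # analytic gradient stands in for a policy-gradient estimate

rng = np.random.default_rng(1)
population = [rng.normal(size=3) for _ in range(10)]
learner = rng.normal(size=3)

for generation in range(50):
    # Gradient-based learner: a few ascent steps on the return.
    for _ in range(5):
        learner = learner + 0.05 * grad_return(learner)

    # Evolutionary side: keep the best individuals and add Gaussian mutants.
    population.sort(key=episode_return, reverse=True)
    parents = population[:5]
    population = parents + [p + 0.1 * rng.normal(size=3) for p in parents]

    # Combination mechanism: periodically inject the learner into the population.
    if generation % 10 == 0:
        population[-1] = learner.copy()

best = max(population, key=episode_return)
print(episode_return(best))
```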
Analysis of Evolutionary Diversity Optimization for Permutation Problems
A. Do, Mingyu Guo, Aneta Neumann, F. Neumann
ACM Transactions on Evolutionary Learning. Pub Date: 2021-02-23. DOI: 10.1145/3561974
Abstract: Generating diverse populations of high-quality solutions has gained interest as a promising extension to the traditional optimization tasks. This work contributes to this line of research with an investigation of evolutionary diversity optimization for three of the most well-studied permutation problems: the Traveling Salesperson Problem (TSP), in both symmetric and asymmetric variants, and the Quadratic Assignment Problem (QAP). It includes an analysis of the worst-case performance of a simple mutation-only evolutionary algorithm with different mutation operators, using an established diversity measure. Theoretical results show that many mutation operators for these problems guarantee convergence to maximally diverse populations of sufficiently small size within cubic to quartic expected runtime. On the other hand, the results regarding QAP suggest that strong mutations give poor worst-case performance, as mutation strength contributes exponentially to the expected runtime. Additionally, experiments are carried out on QAPLIB and synthetic instances in unconstrained and constrained settings, and reveal much more optimistic practical performance while corroborating the theoretical findings regarding mutation strength. These results should serve as a baseline for future studies.
Citations: 7
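A minimal sketch of a mutation-only evolutionary algorithm that maximizes population diversity over permutations is shown below: pick a population member, apply a swap mutation, and accept the offspring whenever the population diversity measure does not decrease. The edge-based diversity measure for cyclic tours is one plausible choice for TSP-like instances and is not necessarily the measure analyzed in the paper.

```python
import random

def tour_edges(perm):
    # Undirected edges of a cyclic tour, as frozensets so (a, b) == (b, a).
    n = len(perm)
    return {frozenset((perm[i], perm[(i + 1) % n])) for i in range(n)}

def diversity(population):
    # Number of distinct edges covered by the whole population (one possible measure).
    return len(set().union(*(tour_edges(p) for p in population)))

def swap_mutation(perm, rng):
    child = list(perm)
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def diversity_ea(n_cities=10, pop_size=5, iterations=2000, seed=0):
    rng = random.Random(seed)
    population = [rng.sample(range(n_cities), n_cities) for _ in range(pop_size)]
    for _ in range(iterations):
        idx = rng.randrange(pop_size)
        child = swap_mutation(population[idx], rng)
        candidate = population[:idx] + [child] + population[idx + 1:]
        # Accept the offspring if population diversity does not decrease.
        if diversity(candidate) >= diversity(population):
            population = candidate
    return population, diversity(population)

pop, div = diversity_ea()
print(div)
```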
Evolving Software: Combining Online Learning with Mutation-Based Stochastic Search
Tiwonge Msulira Banda, Alexandru-Ciprian Zavoianu, Andrei V. Petrovski, Daniel Wöckinger, G. Bramerdorfer
ACM Transactions on Evolutionary Learning. DOI: 10.1145/3597617
Abstract: Evolutionary algorithms and related mutation-based methods have been used in software engineering, with recent emphasis on the problem of repairing bugs. In this setting, programs are typically not synthesized from a random start. Instead, existing solutions, which may be flawed or inefficient, are taken as starting points, with the evolutionary process searching for useful improvements. This approach, however, introduces a challenge for the search algorithm: what is the optimal number of neutral mutations that should be combined? Too many are likely to introduce errors and break the program, while too few hamper the search process, inducing the classic tradeoff between exploration and exploitation. In the context of software improvement, this paper considers MWRepair, an algorithm for enhancing mutation-based searches, which uses online learning to optimize the tradeoff between exploration and exploitation. Its aggressiveness parameter governs how many individual mutations are applied simultaneously to an individual between fitness evaluations. MWRepair is evaluated on Automated Program Repair (APR) problems, where the goal is to repair software bugs with minimal human involvement. The paper analyzes the search space for APR induced by neutral mutations, finding that the greatest probability of finding successful repairs often occurs when many neutral mutations are applied to the original program. Moreover, repair probability follows a characteristic, unimodal distribution. MWRepair uses online learning to leverage this property, finding both rare and multi-edit repairs to defects in the popular Defects4J benchmark set of buggy Java programs.
Citations: 0
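The abstract describes an aggressiveness parameter (how many mutations to stack per repair attempt) tuned by online learning. The sketch below uses a multiplicative-weights-style update over a few candidate aggressiveness levels; the update rule, the reward definition, and the `attempt_repair` stub (with its unimodal success profile) are illustrative assumptions, not the published MWRepair procedure.

```python
import random

# Candidate aggressiveness levels: how many mutations to stack per repair attempt.
LEVELS = [1, 2, 4, 8, 16]
ETA = 0.2  # learning rate for the multiplicative-weights update (illustrative value)

def attempt_repair(num_mutations, rng):
    """Stub standing in for 'apply num_mutations edits and run the test suite'.

    Returns True if the mutated program passes the tests. The unimodal success
    profile below is a hypothetical stand-in for a real repair search space.
    """
    success_prob = {1: 0.01, 2: 0.03, 4: 0.06, 8: 0.04, 16: 0.01}[num_mutations]
    return rng.random() < success_prob

def mw_repair(budget=5000, seed=0):
    rng = random.Random(seed)
    weights = [1.0] * len(LEVELS)
    for _ in range(budget):
        # Sample an aggressiveness level in proportion to its current weight.
        level_idx = rng.choices(range(len(LEVELS)), weights=weights)[0]
        success = attempt_repair(LEVELS[level_idx], rng)
        # Multiplicative update: reward successful levels, gently penalize failures.
        weights[level_idx] *= (1.0 + ETA) if success else (1.0 - ETA / 10)
    return dict(zip(LEVELS, weights))

print(mw_repair())
```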