Evolutionary Computation: Latest Articles

Genetic Programming for Evolving Similarity Functions for Clustering: Representations and Analysis
Andrew Lensen; Bing Xue; Mengjie Zhang
Evolutionary Computation, 2020-12-02. DOI: 10.1162/evco_a_00264. IF 6.8, CAS Q2 (Computer Science).
Abstract: Clustering is a difficult and widely studied data mining task, with many varieties of clustering algorithms proposed in the literature. Nearly all algorithms use a similarity measure such as a distance metric (e.g., Euclidean distance) to decide which instances to assign to the same cluster. These similarity measures are generally predefined and cannot be easily tailored to the properties of a particular dataset, which leads to limitations in the quality and the interpretability of the clusters produced. In this article, we propose a new approach to automatically evolving similarity functions for a given clustering algorithm by using genetic programming. We introduce a new genetic programming-based method which automatically selects a small subset of features (feature selection) and then combines them using a variety of functions (feature construction) to produce dynamic and flexible similarity functions that are specifically designed for a given dataset. We demonstrate how the evolved similarity functions can be used to perform clustering using a graph-based representation. The results of a variety of experiments across a range of large, high-dimensional datasets show that the proposed approach can achieve higher and more consistent performance than the benchmark methods. We further extend the proposed approach to automatically produce multiple complementary similarity functions by using a multi-tree approach, which gives further performance improvements. We also analyse the interpretability and structure of the automatically evolved similarity functions to provide insight into how and why they are superior to standard distance metrics.
Citations: 12
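The core idea above is replacing a fixed distance metric with a dataset-specific expression over a small feature subset. A minimal sketch, assuming a toy hand-written "evolved" expression and a simple nearest-medoid assignment (both hypothetical stand-ins, not the paper's actual GP system or graph-based clusterer):

```python
# A hypothetical "evolved" similarity function: the GP tree has selected a
# small feature subset (indices 0 and 2 here) and combined them with
# arithmetic operators, replacing a fixed Euclidean metric.
def evolved_similarity(a, b):
    return 1.0 / (1.0 + abs(a[0] - b[0]) + abs((a[2] - b[2]) * a[0]))

# Use it the way a similarity-based clusterer would: assign each instance
# to the most similar of a set of fixed medoids.
def assign(instances, medoids):
    return [max(range(len(medoids)),
                key=lambda m: evolved_similarity(x, medoids[m]))
            for x in instances]

data = [(0.1, 5.0, 0.2), (0.2, -1.0, 0.1), (3.0, 0.0, 4.0)]
medoids = [(0.0, 0.0, 0.0), (3.0, 0.0, 4.0)]
print(assign(data, medoids))  # → [0, 0, 1]
```

Note how the expression ignores feature 1 entirely; that built-in feature selection is one source of the interpretability the abstract highlights.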
Evolutionary Image Transition and Painting Using Random Walks
Aneta Neumann; Bradley Alexander; Frank Neumann
Evolutionary Computation, 2020-12-02. DOI: 10.1162/evco_a_00270. IF 6.8, CAS Q2 (Computer Science).
Abstract: We present a study demonstrating how random walk algorithms can be used for evolutionary image transition. We design different mutation operators based on uniform and biased random walks and study how their combination with a baseline mutation operator can lead to interesting image transition processes in terms of visual effects and artistic features. Using feature-based analysis we investigate the evolutionary image transition behaviour with respect to different features and evaluate the images constructed during the image transition process. Afterwards, we investigate how modifications of our biased random walk approaches can be used for evolutionary image painting. We introduce an evolutionary image painting approach whose underlying biased random walk can be controlled by a parameter influencing the bias of the random walk and thereby creating different artistic painting effects.
Citations: 7
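A random-walk mutation operator for image transition can be sketched in a few lines: a walker moves over the pixel grid and copies the target image's pixel at each visited position. The bias scheme below (favouring down/right moves with a tunable probability) is an illustrative assumption, not the paper's actual operators:

```python
import random

random.seed(0)

# Minimal sketch of a random-walk image-transition mutation: a walker
# copies target pixels into the current image along its path. `bias`
# controls how strongly the walk is pulled in one direction; bias=0
# gives a uniform random walk.
def random_walk_transition(current, target, steps, bias=0.5):
    h, w = len(current), len(current[0])
    r, c = h // 2, w // 2
    for _ in range(steps):
        current[r][c] = target[r][c]
        if random.random() < bias:
            dr, dc = random.choice([(1, 0), (0, 1)])   # biased: down/right
        else:
            dr, dc = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r, c = (r + dr) % h, (c + dc) % w              # wrap at borders
    return current

start = [[0] * 4 for _ in range(4)]
goal = [[1] * 4 for _ in range(4)]
out = random_walk_transition(start, goal, steps=8)
print(sum(map(sum, out)))  # number of pixels transitioned so far
```

Repeating this mutation inside an evolutionary loop, and varying the bias parameter, is what produces the different transition and painting effects described in the abstract.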
Errata: Convergence Analysis of Evolutionary Algorithms That Are Based on the Paradigm of Information Geometry
Hans-Georg Beyer
Evolutionary Computation, 2020-12-02. DOI: 10.1162/evco_x_00281. IF 6.8, CAS Q2 (Computer Science).
Citations: 0
Evolved Transistor Array Robot Controllers
Michael Garvie; Ittai Flascher; Andrew Philippides; Adrian Thompson; Phil Husbands
Evolutionary Computation, 2020-12-02. DOI: 10.1162/evco_a_00272. IF 6.8, CAS Q2 (Computer Science).
Abstract: For the first time, a field programmable transistor array (FPTA) was used to evolve robot control circuits directly in analog hardware. Controllers were successfully incrementally evolved for a physical robot engaged in a series of visually guided behaviours, including finding a target in a complex environment where the goal was hidden from most locations. Circuits for recognising spoken commands were also evolved and these were used in conjunction with the controllers to enable voice control of the robot, triggering behavioural switching. Poor quality visual sensors were deliberately used to test the ability of evolved analog circuits to deal with noisy uncertain data in realtime. Visual features were coevolved with the controllers to automatically achieve dimensionality reduction and feature extraction and selection in an integrated way. An efficient new method was developed for simulating the robot in its visual environment. This allowed controllers to be evaluated in a simulation connected to the FPTA. The controllers then transferred seamlessly to the real world. The circuit replication issue was also addressed in experiments where circuits were evolved to be able to function correctly in multiple areas of the FPTA. A methodology was developed to analyse the evolved circuits which provided insights into their operation. Comparative experiments demonstrated the superior evolvability of the transistor array medium.
Citations: 1
Difficulty Adjustable and Scalable Constrained Multiobjective Test Problem Toolkit
Zhun Fan; Wenji Li; Xinye Cai; Hui Li; Caimin Wei; Qingfu Zhang; Kalyanmoy Deb; Erik Goodman
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00259. IF 6.8, CAS Q2 (Computer Science).
Abstract: Multiobjective evolutionary algorithms (MOEAs) have progressed significantly in recent decades, but most of them are designed to solve unconstrained multiobjective optimization problems. In fact, many real-world multiobjective problems contain a number of constraints. To promote research on constrained multiobjective optimization, we first propose a problem classification scheme with three primary types of difficulty, which reflect various types of challenges presented by real-world optimization problems, in order to characterize the constraint functions in constrained multiobjective optimization problems (CMOPs). These are feasibility-hardness, convergence-hardness, and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable CMOPs (DAS-CMOPs, or DAS-CMaOPs when the number of objectives is greater than three) with three types of parameterized constraint functions developed to capture the three proposed types of difficulty. In fact, the combination of the three primary constraint functions with different parameters allows the construction of a large variety of CMOPs, with difficulty that can be defined by a triplet, with each of its parameters specifying the level of one of the types of primary difficulty. Furthermore, the number of objectives in this toolkit can be scaled beyond three. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs and nine CMaOPs, to be called DAS-CMOP1-9 and DAS-CMaOP1-9, respectively. To evaluate the proposed test problems, two popular CMOEAs—MOEA/D-CDP (MOEA/D with constraint dominance principle) and NSGA-II-CDP (NSGA-II with constraint dominance principle)—and two popular constrained many-objective evolutionary algorithms (CMaOEAs)—C-MOEA/DD and C-NSGA-III—are used to compare performance on DAS-CMOP1-9 and DAS-CMaOP1-9 with a variety of difficulty triplets, respectively. The experimental results reveal that mechanisms in MOEA/D-CDP may be more effective in solving convergence-hard DAS-CMOPs, while mechanisms of NSGA-II-CDP may be more effective in solving DAS-CMOPs with simultaneous diversity-, feasibility-, and convergence-hardness. Mechanisms in C-NSGA-III may be more effective in solving feasibility-hard CMaOPs, while mechanisms of C-MOEA/DD may be more effective in solving CMaOPs with convergence-hardness. In addition, none of them can solve these problems efficiently, which stimulates us to continue to develop new CMOEAs and CMaOEAs to solve the suggested DAS-CMOPs and DAS-CMaOPs.
Citations: 90
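The notion of a parameterized constraint acting as a difficulty dial can be illustrated with a toy bi-objective problem. This is a sketch in the spirit of DAS-CMOPs, not the toolkit's actual problem definitions; the ZDT1-style objectives and the single `eta` parameter are assumptions made for brevity:

```python
import math

# Illustrative difficulty-adjustable constrained bi-objective problem.
# `eta` in [0, 1) acts as a feasibility-hardness dial: larger eta shrinks
# the feasible fraction of the search space, making feasible solutions
# harder to find. Feasibility requires c(x) >= 0.
def evaluate(x, eta=0.5):
    f1 = x[0]
    g = 1.0 + sum(xi * xi for xi in x[1:])
    f2 = g * (1.0 - math.sqrt(f1 / g))       # ZDT1-style second objective
    c = math.sin(3 * math.pi * f1) - eta     # feasible band narrows as eta grows
    return (f1, f2), c

obj, c = evaluate([0.2, 0.1, 0.0], eta=0.9)
print(obj, "feasible" if c >= 0 else "infeasible")
```

A full toolkit would expose a triplet of such parameters, one per difficulty type (feasibility, convergence, diversity), and let the number of objectives scale beyond two.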
Simple Hyper-Heuristics Control the Neighbourhood Size of Randomised Local Search Optimally for LeadingOnes*
Andrei Lissovoi; Pietro S. Oliveto; John Alasdair Warwicker
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00258. IF 6.8, CAS Q2 (Computer Science).
Abstract: Selection hyper-heuristics (HHs) are randomised search methodologies which choose and execute heuristics during the optimisation process from a set of low-level heuristics. A machine learning mechanism is generally used to decide which low-level heuristic should be applied in each decision step. In this article, we analyse whether sophisticated learning mechanisms are always necessary for HHs to perform well. To this end we consider the most simple HHs from the literature and rigorously analyse their performance for the LeadingOnes benchmark function. Our analysis shows that the standard Simple Random, Permutation, Greedy, and Random Gradient HHs show no signs of learning. While the former HHs do not attempt to learn from the past performance of low-level heuristics, the idea behind the Random Gradient HH is to continue to exploit the currently selected heuristic as long as it is successful. Hence, it is embedded with a reinforcement learning mechanism with the shortest possible memory. However, the probability that a promising heuristic is successful in the next step is relatively low when perturbing a reasonable solution to a combinatorial optimisation problem. We generalise the “simple” Random Gradient HH so success can be measured over a fixed period of time τ, instead of a single iteration. For LeadingOnes we prove that the Generalised Random Gradient (GRG) HH can learn to adapt the neighbourhood size of Randomised Local Search to optimality during the run. As a result, we prove it has the best possible performance achievable with the low-level heuristics (Randomised Local Search with different neighbourhood sizes), up to lower-order terms. We also prove that the performance of the HH improves as the number of low-level local search heuristics to choose from increases. In particular, with access to k low-level local search heuristics, it outperforms the best-possible algorithm using any subset of the k heuristics. Finally, we show that the advantages of GRG over Randomised Local Search and Evolutionary Algorithms using standard bit mutation increase if the anytime performance is considered (i.e., the performance gap is larger if approximate solutions are sought rather than exact ones). Experimental analyses confirm these results for different problem sizes (up to n = 10^8) and shed some light on the best choices for the parameter τ in various situations.
Citations: 31
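The Generalised Random Gradient mechanism is simple enough to sketch directly: a low-level heuristic (here, RLS flipping k distinct bits) is exploited for a period of tau iterations and replaced by a randomly chosen one only if no improvement occurred during the period. This is a simplified illustration of the scheme described above, not the exact algorithm analysed in the paper:

```python
import random

random.seed(1)

def leading_ones(x):
    """Number of consecutive 1-bits at the start of the bitstring."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def grg(n=50, ks=(1, 2), tau=40):
    """Generalised Random Gradient HH (sketch): returns evaluations used."""
    x = [random.randint(0, 1) for _ in range(n)]
    evals = 0
    k = random.choice(ks)                    # initial low-level heuristic
    while leading_ones(x) < n:
        improved = False
        for _ in range(tau):                 # exploit k for a whole period
            y = x[:]
            for i in random.sample(range(n), k):
                y[i] = 1 - y[i]              # RLS_k: flip k distinct bits
            evals += 1
            if leading_ones(y) > leading_ones(x):
                x, improved = y, True
            if leading_ones(x) == n:
                break
        if not improved:                     # failed period: re-select
            k = random.choice(ks)
    return evals

print(grg())
```

Measuring success over tau steps rather than a single iteration is what lets the mechanism distinguish the currently best neighbourhood size, since even a good heuristic rarely succeeds in any one step.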
EvoComposer: An Evolutionary Algorithm for 4-Voice Music Compositions
R. De Prisco; G. Zaccagnino; R. Zaccagnino
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00265. IF 6.8, CAS Q2 (Computer Science).
Abstract: Evolutionary algorithms mimic evolutionary behaviors in order to solve problems. They have been successfully applied in many areas and appear to have a special relationship with creative problems; such a relationship, over the last two decades, has resulted in a long list of applications, including several in the field of music. In this article, we provide an evolutionary algorithm able to compose music. More specifically we consider the following 4-voice harmonization problem: one of the 4 voices (which are bass, tenor, alto, and soprano) is given as input and the composer has to write the other 3 voices in order to have a complete 4-voice piece of music with a 4-note chord for each input note. Solving such a problem means finding appropriate chords to use for each input note and also finding a placement of the notes within each chord so that melodic concerns are addressed. Such a problem is known as the unfigured harmonization problem. The proposed algorithm for the unfigured harmonization problem, named EvoComposer, uses a novel representation of the solutions in terms of chromosomes (that allows to handle both harmonic and nonharmonic tones), specialized operators (that exploit musical information to improve the quality of the produced individuals), and a novel hybrid multiobjective evaluation function (based on an original statistical analysis of a large corpus of Bach's music). Moreover EvoComposer is the first evolutionary algorithm for this specific problem. EvoComposer is a multiobjective evolutionary algorithm, based on the well-known NSGA-II strategy, and takes into consideration two objectives: the harmonic objective, that is finding appropriate chords, and the melodic objective, that is finding appropriate melodic lines. The composing process is totally automatic, without any human intervention. We also provide an evaluation study showing that EvoComposer outperforms other metaheuristics by producing better solutions in terms of both well-known measures of performance, such as hypervolume, Δ index, coverage of two sets, and standard measures of music creativity. We conjecture that a similar approach can be useful also for similar musical problems.
Citations: 15
Diagonal Acceleration for Covariance Matrix Adaptation Evolution Strategies
Y. Akimoto; N. Hansen
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00260. IF 6.8, CAS Q2 (Computer Science).
Abstract: We introduce an acceleration for covariance matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with the advantages of separable CMA-ES without inheriting its drawbacks. Technically, we introduce a diagonal matrix D that expresses coordinate-wise variances of the sampling distribution in DCD form. The diagonal matrix can learn a rescaling of the problem in the coordinates within a linear number of function evaluations. Diagonal decoding can also exploit separability of the problem, but, crucially, does not compromise the performance on nonseparable problems. The latter is accomplished by modulating the learning rate for the diagonal matrix based on the condition number of the underlying correlation matrix. dd-CMA-ES not only combines the advantages of default and separable CMA-ES, but may achieve overadditive speedup: it improves the performance, and even the scaling, of the better of default and separable CMA-ES on classes of nonseparable test functions that reflect, arguably, a landscape feature commonly observed in practice.

The article makes two further secondary contributions: we introduce two different approaches to guarantee positive definiteness of the covariance matrix with active CMA, which is valuable in particular with large population size; we revise the default parameter setting in CMA-ES, proposing accelerated settings in particular for large dimension.

All our contributions can be viewed as independent improvements of CMA-ES, yet they are also complementary and can be seamlessly combined. In numerical experiments with dd-CMA-ES up to dimension 5120, we observe remarkable improvements over the original covariance matrix adaptation on functions with coordinate-wise ill-conditioning. The improvement is observed also for large population sizes up to about dimension squared.
Citations: 40
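The sampling step behind diagonal decoding can be sketched compactly: candidates are drawn as x = m + σ·D·A·z with z standard normal, where C = AAᵀ is the adapted covariance shape and D holds per-coordinate scale multipliers. The sketch below fixes C to the identity and omits all adaptation (of D, of C, and of the condition-number-based learning-rate modulation), so it only shows what the decoding matrix does to the sampling distribution:

```python
import random

random.seed(0)

n = 5
m = [0.0] * n                       # distribution mean
sigma = 0.3                         # global step size
d = [1.0, 10.0, 100.0, 1.0, 0.1]    # diagonal of D: per-coordinate scales

def sample(m, sigma, d):
    # With C = I the Cholesky factor A is the identity, so each coordinate
    # is drawn as m_i + sigma * d_i * z_i; the effective covariance is
    # sigma^2 * D C D, i.e. coordinate-wise rescaled by d_i^2.
    return [mi + sigma * di * random.gauss(0.0, 1.0)
            for mi, di in zip(m, d)]

x = sample(m, sigma, d)
print(x)
```

Coordinate 2 here varies roughly a thousand times more widely than coordinate 4, which is exactly the coordinate-wise ill-conditioning that adapting D is meant to absorb without touching the full covariance matrix.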
Analysis of the (μ/μI,λ)-CSA-ES with Repair by Projection Applied to a Conically Constrained Problem
Patrick Spettel; Hans-Georg Beyer
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00261. IF 6.8, CAS Q2 (Computer Science).
Abstract: Theoretical analyses of evolution strategies are indispensable for gaining a deep understanding of their inner workings. For constrained problems, rather simple problems are of interest in the current research. This work presents a theoretical analysis of a multi-recombinative evolution strategy with cumulative step size adaptation applied to a conically constrained linear optimization problem. The state of the strategy is modeled by random variables and a stochastic iterative mapping is introduced. For the analytical treatment, fluctuations are neglected and the mean value iterative system is considered. Nonlinear difference equations are derived based on one-generation progress rates. Based on that, expressions for the steady state of the mean value iterative system are derived. By comparison with real algorithm runs, it is shown that for the considered assumptions, the theoretical derivations are able to predict the dynamics and the steady state values of the real runs.
Citations: 2
Generating New Space-Filling Test Instances for Continuous Black-Box Optimization
Mario A. Muñoz; Kate Smith-Miles
Evolutionary Computation, 2020-09-02. DOI: 10.1162/evco_a_00262. IF 6.8, CAS Q2 (Computer Science).
Abstract: This article presents a method to generate diverse and challenging new test instances for continuous black-box optimization. Each instance is represented as a feature vector of exploratory landscape analysis measures. By projecting the features into a two-dimensional instance space, the location of existing test instances can be visualized, and their similarities and differences revealed. New instances are generated through genetic programming which evolves functions with controllable characteristics. Convergence to selected target points in the instance space is used to drive the evolutionary process, such that the new instances span the entire space more comprehensively. We demonstrate the method by generating two-dimensional functions to visualize its success, and ten-dimensional functions to test its scalability. We show that the method can recreate existing test functions when target points are co-located with existing functions, and can generate new functions with entirely different characteristics when target points are located in empty regions of the instance space. Moreover, we test the effectiveness of three state-of-the-art algorithms on the new set of instances. The results demonstrate that the new set is not only more diverse than a well-known benchmark set, but also more challenging for the tested algorithms. Hence, the method opens up a new avenue for developing test instances with controllable characteristics, necessary to expose the strengths and weaknesses of algorithms, and drive algorithm development.
Citations: 30
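The target-driven fitness that steers the evolution of new instances can be sketched simply: map a candidate function to a point in instance space via landscape features, then score it by its distance to a chosen target point. The two features used below (sample mean and value spread) are simplistic stand-ins for the paper's exploratory landscape analysis measures:

```python
import math

# Map a candidate test function to a 2-D point in a toy "instance space".
# Real exploratory landscape analysis would use richer measures
# (e.g. fitness-distance correlation, dispersion) in place of these.
def features(f, samples):
    values = [f(x) for x in samples]
    mean = sum(values) / len(values)
    spread = max(values) - min(values)
    return (mean, spread)

# Fitness to MINIMISE when evolving new instances: distance between the
# candidate's feature vector and a target point in instance space.
def fitness(f, samples, target):
    fx, fy = features(f, samples)
    return math.hypot(fx - target[0], fy - target[1])

samples = [i / 10 for i in range(-10, 11)]
candidate = lambda x: x * x
print(fitness(candidate, samples, target=(0.5, 1.0)))
```

Driving a genetic-programming population toward targets placed in empty regions of the instance space is what makes the generated set space-filling rather than clustered around existing benchmarks.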