D. Molina, F. Moreno-García, F. Herrera
2017 IEEE Congress on Evolutionary Computation (CEC), 2017-06-05
DOI: 10.1109/CEC.2017.7969392
Analysis among winners of different IEEE CEC competitions on real-parameters optimization: Is there always improvement?
For years, single-objective real-parameter optimization competitions have been organized at the IEEE Congress on Evolutionary Computation: the organizers define a common experimental framework, researchers run the experiments with their proposals using it, and the obtained results are compared. This is an excellent way to learn which algorithms (and ideas) improve upon others, creating guidelines that advance the field. However, the benchmark changes across several competitions, and the winners of previous benchmarks are not always included in the comparisons. As a result, the improvement that new proposals offer over those of previous years may not be clear. In this paper, we compare the winners of different years against one another on the different proposed benchmarks, and we analyse the results obtained by all of them to observe whether the winning proposals of these competitions show real improvement over the years.