{"title":"多目标进化算法公平比较的种群大小规范","authors":"H. Ishibuchi, Lie Meng Pang, Ke Shang","doi":"10.1109/SMC42975.2020.9282850","DOIUrl":null,"url":null,"abstract":"In general, performance comparison results of optimization algorithms depend on the parameter specifications in each algorithm. For fair comparison, it may be needed to use the best specifications for each algorithm instead of using the same specifications for all algorithms. This is because each algorithm has its best specifications. However, in the evolutionary multi-objective optimization (EMO) field, performance comparison has usually been performed under the same parameter specifications for all algorithms. Especially, the same population size has always been used. In this paper, we discuss this practice from a viewpoint of fair comparison of EMO algorithms. First, we demonstrate that performance comparison results depend on the population size. Next, we explain a new trend of performance comparison where each algorithm is evaluated by selecting a pre-specified number of solutions from the examined solutions (i.e., by selecting a solution subset with a pre-specified size). Then, we discuss the selected subset size specification. Through computational experiments, we show that performance comparison results do not strongly depend on the selected subset size while they depend on the population size.","PeriodicalId":6718,"journal":{"name":"2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)","volume":"35 1","pages":"1095-1102"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Population Size Specification for Fair Comparison of Multi-objective Evolutionary Algorithms\",\"authors\":\"H. Ishibuchi, Lie Meng Pang, Ke Shang\",\"doi\":\"10.1109/SMC42975.2020.9282850\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In general, performance comparison results of optimization algorithms depend on the parameter specifications in each algorithm. For fair comparison, it may be needed to use the best specifications for each algorithm instead of using the same specifications for all algorithms. This is because each algorithm has its best specifications. However, in the evolutionary multi-objective optimization (EMO) field, performance comparison has usually been performed under the same parameter specifications for all algorithms. Especially, the same population size has always been used. In this paper, we discuss this practice from a viewpoint of fair comparison of EMO algorithms. First, we demonstrate that performance comparison results depend on the population size. Next, we explain a new trend of performance comparison where each algorithm is evaluated by selecting a pre-specified number of solutions from the examined solutions (i.e., by selecting a solution subset with a pre-specified size). Then, we discuss the selected subset size specification. 
Through computational experiments, we show that performance comparison results do not strongly depend on the selected subset size while they depend on the population size.\",\"PeriodicalId\":6718,\"journal\":{\"name\":\"2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)\",\"volume\":\"35 1\",\"pages\":\"1095-1102\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SMC42975.2020.9282850\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMC42975.2020.9282850","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Population Size Specification for Fair Comparison of Multi-objective Evolutionary Algorithms
In general, the results of performance comparisons of optimization algorithms depend on the parameter specifications used in each algorithm. For a fair comparison, it may be necessary to use the best specifications for each algorithm rather than the same specifications for all algorithms, because each algorithm has its own best specifications. However, in the evolutionary multi-objective optimization (EMO) field, performance comparisons have usually been performed under the same parameter specifications for all algorithms. In particular, the same population size has always been used. In this paper, we discuss this practice from the viewpoint of fair comparison of EMO algorithms. First, we demonstrate that performance comparison results depend on the population size. Next, we explain a new trend in performance comparison where each algorithm is evaluated by selecting a pre-specified number of solutions from all examined solutions (i.e., by selecting a solution subset of a pre-specified size). Then, we discuss the specification of the selected subset size. Through computational experiments, we show that performance comparison results do not strongly depend on the selected subset size, whereas they clearly depend on the population size.
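The subset-based evaluation described in the abstract can be illustrated with a short sketch. The following Python code is not taken from the paper; it is a minimal, hypothetical example of one possible subset-selection rule (greedy hypervolume-based selection for two minimization objectives), used here only to show how algorithms can be compared by a fixed-size subset of their examined solutions rather than by their final populations.

```python
# Minimal sketch (assumption: greedy hypervolume-based subset selection,
# two minimization objectives). The paper does not prescribe this rule.
import numpy as np

def hypervolume_2d(points, ref):
    """Hypervolume of a 2-objective minimization point set w.r.t. reference point ref."""
    pts = np.asarray([p for p in points if np.all(p < ref)])
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]            # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                         # only non-dominated slices add area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def greedy_subset(examined, k, ref):
    """Greedily pick k solutions that maximize hypervolume (one possible selection rule)."""
    examined = [np.asarray(p, dtype=float) for p in examined]
    selected = []
    for _ in range(min(k, len(examined))):
        best = max(
            (p for p in examined if not any(np.array_equal(p, s) for s in selected)),
            key=lambda p: hypervolume_2d(selected + [p], ref),
        )
        selected.append(best)
    return selected

# Hypothetical comparison: two imaginary algorithms are evaluated by the
# hypervolume of fixed-size subsets (k solutions) selected from all solutions
# they examined, independently of the population sizes they used.
rng = np.random.default_rng(0)
examined_a = rng.random((200, 2))                # all solutions examined by algorithm A
examined_b = rng.random((50, 2))                 # algorithm B examined fewer solutions
k, ref = 10, np.array([1.1, 1.1])
print(hypervolume_2d(greedy_subset(examined_a, k, ref), ref))
print(hypervolume_2d(greedy_subset(examined_b, k, ref), ref))
```

Because the selected subset size k is fixed for all algorithms, this style of evaluation decouples the comparison from each algorithm's population size, which is the motivation discussed in the abstract.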