{"title":"HRA:元搜索优化算法排序的多标准框架","authors":"Evgenia-Maria K. Goula, Dimitris G. Sotiropoulos","doi":"arxiv-2409.11617","DOIUrl":null,"url":null,"abstract":"Metaheuristic algorithms are essential for solving complex optimization\nproblems in different fields. However, the difficulty in comparing and rating\nthese algorithms remains due to the wide range of performance metrics and\nproblem dimensions usually involved. On the other hand, nonparametric\nstatistical methods and post hoc tests are time-consuming, especially when we\nonly need to identify the top performers among many algorithms. The\nHierarchical Rank Aggregation (HRA) algorithm aims to efficiently rank\nmetaheuristic algorithms based on their performance across many criteria and\ndimensions. The HRA employs a hierarchical framework that begins with\ncollecting performance metrics on various benchmark functions and dimensions.\nRank-based normalization is employed for each performance measure to ensure\ncomparability and the robust TOPSIS aggregation is applied to combine these\nrankings at several hierarchical levels, resulting in a comprehensive ranking\nof the algorithms. Our study uses data from the CEC 2017 competition to\ndemonstrate the robustness and efficacy of the HRA framework. It examines 30\nbenchmark functions and evaluates the performance of 13 metaheuristic\nalgorithms across five performance indicators in four distinct dimensions. This\npresentation highlights the potential of the HRA to enhance the interpretation\nof the comparative advantages and disadvantages of various algorithms by\nsimplifying practitioners' choices of the most appropriate algorithm for\ncertain optimization problems.","PeriodicalId":501291,"journal":{"name":"arXiv - CS - Performance","volume":"31 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"HRA: A Multi-Criteria Framework for Ranking Metaheuristic Optimization Algorithms\",\"authors\":\"Evgenia-Maria K. Goula, Dimitris G. Sotiropoulos\",\"doi\":\"arxiv-2409.11617\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Metaheuristic algorithms are essential for solving complex optimization\\nproblems in different fields. However, the difficulty in comparing and rating\\nthese algorithms remains due to the wide range of performance metrics and\\nproblem dimensions usually involved. On the other hand, nonparametric\\nstatistical methods and post hoc tests are time-consuming, especially when we\\nonly need to identify the top performers among many algorithms. The\\nHierarchical Rank Aggregation (HRA) algorithm aims to efficiently rank\\nmetaheuristic algorithms based on their performance across many criteria and\\ndimensions. The HRA employs a hierarchical framework that begins with\\ncollecting performance metrics on various benchmark functions and dimensions.\\nRank-based normalization is employed for each performance measure to ensure\\ncomparability and the robust TOPSIS aggregation is applied to combine these\\nrankings at several hierarchical levels, resulting in a comprehensive ranking\\nof the algorithms. Our study uses data from the CEC 2017 competition to\\ndemonstrate the robustness and efficacy of the HRA framework. It examines 30\\nbenchmark functions and evaluates the performance of 13 metaheuristic\\nalgorithms across five performance indicators in four distinct dimensions. 
This\\npresentation highlights the potential of the HRA to enhance the interpretation\\nof the comparative advantages and disadvantages of various algorithms by\\nsimplifying practitioners' choices of the most appropriate algorithm for\\ncertain optimization problems.\",\"PeriodicalId\":501291,\"journal\":{\"name\":\"arXiv - CS - Performance\",\"volume\":\"31 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Performance\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11617\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Performance","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11617","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
HRA: A Multi-Criteria Framework for Ranking Metaheuristic Optimization Algorithms
Metaheuristic algorithms are essential for solving complex optimization problems in many fields. However, comparing and ranking these algorithms remains difficult because of the wide range of performance metrics and problem dimensions typically involved. Moreover, nonparametric statistical methods and post hoc tests are time-consuming, especially when the goal is only to identify the top performers among many algorithms. The Hierarchical Rank Aggregation (HRA) algorithm aims to rank metaheuristic algorithms efficiently based on their performance across multiple criteria and dimensions. HRA employs a hierarchical framework that begins by collecting performance metrics on various benchmark functions and dimensions. Rank-based normalization is applied to each performance measure to ensure comparability, and robust TOPSIS aggregation combines these rankings at several hierarchical levels, yielding a comprehensive ranking of the algorithms. Our study uses data from the CEC 2017 competition to demonstrate the robustness and efficacy of the HRA framework: it examines 30 benchmark functions and evaluates 13 metaheuristic algorithms on five performance indicators in four distinct dimensions. This work highlights the potential of HRA to clarify the comparative strengths and weaknesses of different algorithms, thereby simplifying practitioners' choice of the most appropriate algorithm for a given optimization problem.
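
The abstract outlines a two-stage procedure: per-metric rank-based normalization followed by TOPSIS aggregation at successive levels of the hierarchy. The sketch below is a minimal, illustrative Python/NumPy version of that idea under simplifying assumptions: equal criterion weights, plain rather than robust TOPSIS, a two-level hierarchy, and hypothetical errors/runtimes data; the paper's actual robust aggregation and level structure may differ.

import numpy as np
from scipy.stats import rankdata

def rank_normalize(scores):
    # scores: (n_algorithms, n_problems), lower is better (e.g. mean error).
    # Ranking within each column puts every measure on the same 1..n scale.
    return rankdata(scores, axis=0, method="average")

def topsis(matrix, weights=None):
    # Plain TOPSIS over a (n_algorithms, n_criteria) matrix of cost-type
    # values (smaller rank = better). Returns closeness coefficients in
    # [0, 1], where larger means closer to the ideal algorithm.
    X = np.asarray(matrix, dtype=float)
    w = np.full(X.shape[1], 1.0 / X.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    V = w * X / np.linalg.norm(X, axis=0)             # normalize and weight
    ideal, anti_ideal = V.min(axis=0), V.max(axis=0)  # cost criteria: column minimum is ideal
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)

# Hypothetical data: 4 algorithms, 6 benchmark functions, 2 performance measures.
rng = np.random.default_rng(0)
errors = rng.random((4, 6))     # e.g. mean best-so-far error per function
runtimes = rng.random((4, 6))   # e.g. mean runtime per function

# Level 1: aggregate each measure's per-function ranks into one score per algorithm.
per_measure = [topsis(rank_normalize(m)) for m in (errors, runtimes)]

# Level 2: closeness scores are benefit-type (higher is better), so negate them
# before re-ranking, then aggregate across measures into the final ordering.
final = topsis(rank_normalize(-np.column_stack(per_measure)))
print("algorithms, best first:", np.argsort(-final))

Ranking before aggregating is what keeps the scheme insensitive to the very different scales of error, runtime, and other indicators, which is the comparability property the abstract attributes to rank-based normalization.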