A Labor Division Artificial Gorilla Troops Algorithm for Engineering Optimization
Chenhuizi Liu, Bowen Wu, Liangkuan Zhu
Biomimetics, vol. 10, no. 3 (published 2025-02-20). DOI: 10.3390/biomimetics10030127
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11940603/pdf/
Journal impact factor 3.4, JCR Q1 (Engineering, Multidisciplinary)
Citations: 0
Abstract
The Artificial Gorilla Troops Optimizer (GTO) has emerged as an efficient metaheuristic technique for solving complex optimization problems. However, the conventional GTO algorithm has a critical limitation: all individuals, regardless of their roles, utilize identical search equations and perform exploration and exploitation sequentially. This uniform approach neglects the potential benefits of labor division, consequently restricting the algorithm's performance. To address this limitation, we propose an enhanced Labor Division Gorilla Troops Optimizer (LDGTO), which incorporates natural mechanisms of labor division and outcome allocation. In the labor division phase, a stimulus-response model is designed to differentiate exploration and exploitation tasks, enabling gorilla individuals to adaptively adjust their search equations based on environmental changes. In the outcome allocation phase, three behavioral development modes (self-enhancement, competence maintenance, and elimination) are implemented, corresponding to three developmental stages: elite, average, and underperforming individuals. The performance of LDGTO is rigorously evaluated through three benchmark test suites, comprising 12 unimodal, 25 multimodal, and 10 combinatorial functions, as well as two real-world engineering applications, including four-bar transplanter mechanism design and color image segmentation. Experimental results demonstrate that LDGTO consistently outperforms three variants of GTO and seven state-of-the-art metaheuristic algorithms in most test cases.
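The abstract does not give the paper's actual equations, but the general idea of a stimulus-response labor-division model can be sketched with the classic threshold-response formulation from division-of-labor literature. The sketch below is a hypothetical illustration, not the authors' implementation: the function names, the exponent `n`, the use of a stagnation-based stimulus, and the tertile split into elite/average/underperforming individuals are all assumptions for illustration only.

```python
import random

def response_probability(stimulus: float, threshold: float, n: int = 2) -> float:
    # Classic threshold-response model: the probability of taking on a task
    # rises with the environmental stimulus and falls with the individual's
    # internal threshold (hypothetical stand-in for the paper's model).
    return stimulus**n / (stimulus**n + threshold**n)

def assign_task(stimulus: float, threshold: float, rng=random.random) -> str:
    # An individual switches to exploration with the response probability,
    # otherwise it exploits its current region.
    return "explore" if rng() < response_probability(stimulus, threshold) else "exploit"

def allocate_outcomes(fitnesses: list[float]) -> dict[int, str]:
    # Hypothetical outcome allocation: rank the population by fitness
    # (minimization assumed) and split it into thirds, mirroring the
    # elite / average / underperforming stages named in the abstract.
    order = sorted(range(len(fitnesses)), key=lambda i: fitnesses[i])
    k = len(order) // 3
    modes = {}
    for rank, idx in enumerate(order):
        if rank < k:
            modes[idx] = "self-enhancement"      # elite individuals
        elif rank < len(order) - k:
            modes[idx] = "competence maintenance"  # average individuals
        else:
            modes[idx] = "elimination"            # underperforming individuals
    return modes
```

For example, a strongly stagnating individual (high stimulus, low threshold) is almost certain to be routed to exploration, while the elite third of the population would keep refining its current solutions under self-enhancement.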