Strengthened teaching–learning-based optimization algorithm for numerical optimization tasks
Xuefen Chen, Chunming Ye, Yang Zhang, Lingwei Zhao, Jing Guo, Kun Ma
Evolutionary Intelligence, published 2023-04-10. DOI: 10.1007/s12065-023-00839-x
Abstract
The teaching–learning-based optimization algorithm (TLBO) is an efficient optimizer, but it suffers from premature convergence and stagnation at local optima. This paper proposes the strengthened teaching–learning-based optimization algorithm (STLBO), which enhances the basic TLBO's exploration and exploitation by introducing three strengthening mechanisms: a linearly increasing teaching factor, an elite system composed of a new teacher and a class leader, and Cauchy mutation. Seven STLBO variants are then designed from combined deployments of these three mechanisms. Their performance is evaluated on thirteen numerical optimization tasks: seven unimodal tasks (f1–f7) and six multimodal tasks (f8–f13). The results show that STLBO7 ranks first and is significantly better than the original TLBO, while the remaining six variants also outperform TLBO. Finally, STLBO7 is compared with other advanced optimization techniques, including HS, PSO, MFO, GA and HHO. The numerical results and convergence curves show that STLBO7 clearly outperforms these competitors, with stronger local-optima avoidance, faster convergence and higher solution accuracy. Together, these findings demonstrate that the STLBO variants improve the search performance of TLBO.
Data Availability Statement
All data generated or analyzed during this study are included in this published article (and its supplementary information files).
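To make the mechanisms in the abstract concrete, the following is a minimal sketch of a TLBO-style teacher phase augmented with a linearly increasing teaching factor and Cauchy mutation. It is not the paper's STLBO implementation: the teaching-factor schedule, the Cauchy mutation scale, and the `stlbo_sketch`/`sphere` names are illustrative assumptions, the elite system is omitted, and TLBO's learner phase is left out for brevity.

```python
import numpy as np

def sphere(x):
    # Classic unimodal benchmark (f1 in many suites): sum of squares.
    return np.sum(x ** 2)

def stlbo_sketch(obj, dim=5, pop=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(obj, 1, X)
    for t in range(iters):
        # Linearly increasing teaching factor (illustrative schedule,
        # not the paper's exact formula): grows from 1 toward 2.
        tf = 1.0 + t / max(iters - 1, 1)
        teacher = X[np.argmin(fit)]
        mean = X.mean(axis=0)
        # Teacher phase: move each learner toward the teacher,
        # away from the class mean scaled by the teaching factor.
        cand = X + rng.random((pop, dim)) * (teacher - tf * mean)
        # Cauchy mutation of the best learner to escape local optima;
        # the 0.1 scale is an assumption, not taken from the paper.
        cand[np.argmin(fit)] = teacher + rng.standard_cauchy(dim) * 0.1
        cand = np.clip(cand, lb, ub)
        cand_fit = np.apply_along_axis(obj, 1, cand)
        # Greedy selection: keep a candidate only if it improves.
        better = cand_fit < fit
        X[better], fit[better] = cand[better], cand_fit[better]
    return fit.min()
```

Because selection is greedy, the best fitness is monotonically non-increasing, so running `stlbo_sketch(sphere)` drives the sphere objective toward zero; the Cauchy perturbation's heavy tails occasionally produce large jumps, which is what gives the mutation its local-optima-escaping character.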
About the Journal
This journal provides an international forum for the timely publication and dissemination of foundational and applied research in evolutionary intelligence. Its scope spans the emerging fields of contemporary artificial intelligence, including big data, deep learning, and computational neuroscience, bridged with evolutionary computing and other population-based search methods. Topics of interest cover evolutionary models of computation empowered with intelligence-based approaches, including but not limited to architectures, model optimization and tuning, machine learning algorithms, life-inspired adaptive algorithms, swarm-oriented strategies, high-performance computing, and massive data processing, with applications to domains such as computer vision, image processing, simulation, robotics, computational finance, media, the internet of things, medicine, bioinformatics, and smart cities. Surveys outlining the state of the art in specific subfields and applications are welcome.