Meta-learning-based adaptive operator selection for traveling salesman problem

IF 6.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Ho Young Jeong, Byung Duk Song
Applied Soft Computing, Volume 185, Article 113930
DOI: 10.1016/j.asoc.2025.113930 · Published: 2025-09-17
https://www.sciencedirect.com/science/article/pii/S1568494625012438
Citations: 0

Abstract

In evolutionary optimization, effectively leveraging knowledge about search operator performance is crucial for enhancing algorithmic results. Traditional operator selection strategies often rely on fixed heuristics or trial-and-error, which struggle to adapt to the nonstationary search dynamics of evolutionary runs—i.e., the stage-dependent, instance-dependent, and population-dependent shifts in operator effectiveness—and typically yield suboptimal performance. To address these challenges, we propose a novel meta-learning-based adaptive operator selection (AOS) framework. It leverages a Long Short-Term Memory (LSTM) neural network to learn temporal patterns of operator performance from historical data and dynamically adjust operator choice on-the-fly. The framework also integrates domain-specific biases to preserve population diversity and promote effective exploration, and it continuously updates its selection policy through dynamic online learning as the evolutionary process unfolds. Experiments on the Traveling Salesman Problem (TSP) benchmark demonstrate that the proposed LSTM-based AOS method significantly outperforms conventional approaches to operator selection. In particular, it achieved a median optimality gap of 9.87 % on a suite of TSP instances—approximately a 20 % improvement over the best fixed-operator configuration—indicating superior solution quality. Moreover, our approach consistently surpassed other state-of-the-art AOS techniques, underscoring the efficacy of the LSTM-driven framework and its significant potential to enhance evolutionary algorithm performance on complex optimization tasks.
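The abstract describes a control loop in which a learned model ranks candidate search operators from their recent reward history and the selection policy is updated online as the search runs. The paper's actual predictor is an LSTM trained on operator-performance histories; as a rough, self-contained illustration of the surrounding AOS loop only, the sketch below replaces the LSTM with a simple recency-weighted reward average. All function names, operators, and parameters here are illustrative assumptions, not the authors' code.

```python
import random

def two_opt(tour):
    """Reverse a random segment of the tour (a classic 2-opt move)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def swap(tour):
    """Exchange two randomly chosen cities."""
    i, j = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def tour_length(tour, dist):
    """Total length of the closed tour under distance matrix `dist`."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]]
               for k in range(len(tour)))

def aos_search(dist, iters=2000, alpha=0.3, eps=0.1, seed=0):
    """Adaptive operator selection: keep a recency-weighted reward
    estimate per operator, pick the best one, explore eps-greedily."""
    random.seed(seed)
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best = tour_length(tour, dist)
    ops = [two_opt, swap]
    score = [0.0] * len(ops)  # stand-in for the LSTM's predicted operator quality
    for _ in range(iters):
        if random.random() < eps:
            k = random.randrange(len(ops))                    # explore
        else:
            k = max(range(len(ops)), key=score.__getitem__)   # exploit
        cand = ops[k](tour)
        cost = tour_length(cand, dist)
        reward = max(0.0, best - cost)                        # improvement over incumbent
        score[k] = (1 - alpha) * score[k] + alpha * reward    # online policy update
        if cost < best:
            tour, best = cand, cost
    return tour, best
```

Swapping the recency average for the paper's learned predictor would amount to replacing the `score` update with an LSTM forward pass over a window of recent operator rewards; the selection and online-update structure of the loop stays the same.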
Source Journal
Applied Soft Computing (Engineering & Technology, Computer Science: Interdisciplinary Applications)
CiteScore: 15.80
Self-citation rate: 6.90%
Articles per year: 874
Review time: 10.9 months
Journal description: Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and similar techniques to address real-world complexities. Applied Soft Computing is a rolling publication: articles appear as soon as the editor-in-chief accepts them, so the website is continuously updated and publication times are short.