Enhancing Generalization and Scalability for Multi-Objective Optimization with Population Pre-Training.

IF 3.4 · CAS Region 2 (Computer Science) · JCR Q2, Computer Science, Artificial Intelligence
Haokai Hong, Liang Feng, Min Jiang, Kay Chen Tan
DOI: 10.1162/EVCO.a.394
Journal: Evolutionary Computation, pp. 1-32
Publication date: 2026-03-27 (Journal Article)
Citations: 0

Abstract

Multi-objective optimization problems (MOPs) require the simultaneous optimization of conflicting objectives. Real-world MOPs often exhibit complex characteristics, including high-dimensional decision spaces, many objectives, or computationally expensive evaluations. While population-based evolutionary computation has shown promise in addressing diverse MOPs through problem-specific adaptations, existing approaches frequently lack generalizability across distinct problem classes. Inspired by pre-training paradigms in machine learning, we propose a Population Pre-trained Model (PPM) that leverages historical optimization knowledge to efficiently solve complex MOPs within a unified framework. PPM models evolutionary patterns via population modeling, addressing two key challenges: (1) handling diverse decision spaces across problems and (2) capturing the interdependency between objective and decision spaces during evolution. To this end, we develop a population transformer architecture that embeds decision spaces of varying scales into a common latent space, enabling knowledge transfer across diverse problems. Furthermore, our architecture integrates objective-space features through objective fusion to enhance population prediction accuracy for complex MOPs. Our approach achieves robust generalization to downstream optimization tasks with up to 5,000 dimensions, five times the training scale and 200 times greater than prior work. Extensive evaluations on standardized benchmarks and out-of-training real-world applications demonstrate the consistent superiority of our method over state-of-the-art algorithms tailored to specific problem classes, improving the performance and generalization of evolutionary computation in solving MOPs.
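The paper's actual architecture is a learned population transformer; as a rough intuition for the two ideas named in the abstract (embedding decision vectors of varying dimension into a common latent space, then fusing objective-space features), the following is a minimal sketch. The fixed random projection, the latent size, and the concatenation-style fusion are all illustrative assumptions, not the paper's method.

```python
import math
import random

LATENT_DIM = 8  # hypothetical shared latent size (assumption)

def embed_solution(x, latent_dim=LATENT_DIM, seed=0):
    """Project a decision vector of arbitrary dimension into a fixed-size
    latent vector via a seeded random linear map. This stands in for the
    learned embedding that maps decision spaces of varying scales into a
    common latent space."""
    rng = random.Random(seed)
    z = []
    for _ in range(latent_dim):
        # One random projection row per latent coordinate, scaled so the
        # output magnitude is roughly independent of the input dimension.
        row = [rng.gauss(0.0, 1.0 / math.sqrt(len(x))) for _ in x]
        z.append(sum(w * xi for w, xi in zip(row, x)))
    return z

def fuse_objectives(z, objectives):
    """Attach objective-space features to the latent decision features,
    mimicking 'objective fusion' at the crudest possible level
    (concatenation)."""
    return z + list(objectives)

# Populations from problems with very different decision dimensions map
# to tokens of identical length, so a single downstream model can
# process both -- the property that enables cross-problem transfer.
tok_a = fuse_objectives(embed_solution([0.1] * 30), [0.5, 0.7])
tok_b = fuse_objectives(embed_solution([0.1] * 5000), [0.2, 0.9])
```

Here `tok_a` and `tok_b` both have length `LATENT_DIM + 2`, regardless of whether the source problem had 30 or 5,000 decision variables.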

Source journal: Evolutionary Computation (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 6.40
Self-citation rate: 1.50%
Articles per year: 20
Review time: 3 months
Journal description: Evolutionary Computation is a leading journal in its field. It provides an international forum for facilitating and enhancing the exchange of information among researchers involved in both the theoretical and practical aspects of computational systems drawing their inspiration from nature, with particular emphasis on evolutionary models of computation such as genetic algorithms, evolutionary strategies, classifier systems, evolutionary programming, and genetic programming. It welcomes articles from related fields such as swarm intelligence (e.g. Ant Colony Optimization and Particle Swarm Optimization) and other nature-inspired computation paradigms (e.g. Artificial Immune Systems). As well as publishing articles describing theoretical and/or experimental work, the journal also welcomes application-focused papers describing breakthrough results in an application domain, or methodological papers where the specificities of the real-world problem led to significant algorithmic improvements that could possibly be generalized to other areas.