Title: Enhancing Generalization and Scalability for Multi-Objective Optimization with Population Pre-Training
Authors: Haokai Hong, Liang Feng, Min Jiang, Kay Chen Tan
Journal: Evolutionary Computation, pp. 1-32
DOI: 10.1162/EVCO.a.394 (https://doi.org/10.1162/EVCO.a.394)
Publication date: 2026-03-27
Citations: 0
Abstract
Multi-objective optimization problems (MOPs) require the simultaneous optimization of conflicting objectives. Real-world MOPs often exhibit complex characteristics, including high-dimensional decision spaces, many objectives, or computationally expensive evaluations. While population-based evolutionary computation has shown promise in addressing diverse MOPs through problem-specific adaptations, existing approaches frequently lack generalizability across distinct problem classes. Inspired by pre-training paradigms in machine learning, we propose a Population Pre-trained Model (PPM) that leverages historical optimization knowledge to efficiently solve complex MOPs within a unified framework. PPM models evolutionary patterns via population modeling, addressing two key challenges: (1) handling diverse decision spaces across problems and (2) capturing the interdependency between objective and decision spaces during evolution. To this end, we develop a population transformer architecture that embeds decision spaces of varying scales into a common latent space, enabling knowledge transfer across diverse problems. Furthermore, our architecture integrates objective-space features through objective fusion to enhance population prediction accuracy for complex MOPs. Our approach achieves robust generalization to downstream optimization tasks with up to 5,000 dimensions (five times the training scale and 200 times that of prior work). Extensive evaluations on standardized benchmarks and out-of-training real-world applications demonstrate the consistent superiority of our method over state-of-the-art algorithms tailored to specific problem classes, improving the performance and generalization of evolutionary computation in solving MOPs.
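The abstract's central architectural idea is that populations from problems with different decision-space dimensions can be mapped into one shared latent space, so a single pre-trained model transfers across problems. A minimal sketch of that idea (not the paper's actual architecture, whose details are not given here) is to treat each decision variable as a token, embed tokens with a shared weight matrix, and pool over the variable dimension so any problem size yields the same fixed-width representation. All names, sizes, and weights below are hypothetical illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16  # shared latent width (hypothetical choice)

def embed_population(population, w_token):
    """Map a population with any decision dimension d to a fixed-size latent code.

    Each decision variable becomes a token of (value, normalized position),
    so problems with different d reuse the same embedding weights -- a toy
    analogue of embedding decision spaces of varying scales into one
    common latent space.
    """
    n, d = population.shape
    pos = np.linspace(0.0, 1.0, d)                                  # positional code per variable
    tokens = np.stack([population, np.broadcast_to(pos, (n, d))], axis=-1)  # (n, d, 2)
    h = np.tanh(tokens @ w_token)                                   # (n, d, LATENT_DIM)
    return h.mean(axis=1)                                           # pool over variables -> (n, LATENT_DIM)

# Two problems with very different decision-space dimensions
pop_a = rng.standard_normal((8, 30))    # 30-dimensional problem
pop_b = rng.standard_normal((8, 500))   # 500-dimensional problem

w = rng.standard_normal((2, LATENT_DIM)) * 0.1  # shared token-embedding weights
za = embed_population(pop_a, w)
zb = embed_population(pop_b, w)
print(za.shape, zb.shape)  # both populations land in the same (8, 16) latent space
```

The design choice to illustrate here is dimension-invariance: because the embedding acts per variable and pooling collapses the variable axis, the same weights serve a 30-dimensional and a 5,000-dimensional problem, which is the property that lets knowledge transfer across decision spaces of different scales.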
About the journal:
Evolutionary Computation is a leading journal in its field. It provides an international forum for facilitating and enhancing the exchange of information among researchers involved in both the theoretical and practical aspects of computational systems drawing their inspiration from nature, with particular emphasis on evolutionary models of computation such as genetic algorithms, evolution strategies, classifier systems, evolutionary programming, and genetic programming. It welcomes articles from related fields such as swarm intelligence (e.g., Ant Colony Optimization and Particle Swarm Optimization) and other nature-inspired computation paradigms (e.g., Artificial Immune Systems). In addition to articles describing theoretical and/or experimental work, the journal welcomes application-focused papers describing breakthrough results in an application domain, as well as methodological papers where the specificities of a real-world problem led to significant algorithmic improvements that could potentially be generalized to other areas.