{"title":"帕累托集合预测辅助双层多目标优化","authors":"Bing Wang, Hemant K. Singh, Tapabrata Ray","doi":"arxiv-2409.03328","DOIUrl":null,"url":null,"abstract":"Bilevel optimization problems comprise an upper level optimization task that\ncontains a lower level optimization task as a constraint. While there is a\nsignificant and growing literature devoted to solving bilevel problems with\nsingle objective at both levels using evolutionary computation, there is\nrelatively scarce work done to address problems with multiple objectives\n(BLMOP) at both levels. For black-box BLMOPs, the existing evolutionary\ntechniques typically utilize nested search, which in its native form consumes\nlarge number of function evaluations. In this work, we propose to reduce this\nexpense by predicting the lower level Pareto set for a candidate upper level\nsolution directly, instead of conducting an optimization from scratch. Such a\nprediction is significantly challenging for BLMOPs as it involves one-to-many\nmapping scenario. We resolve this bottleneck by supplementing the dataset using\na helper variable and construct a neural network, which can then be trained to\nmap the variables in a meaningful manner. Then, we embed this initialization\nwithin a bilevel optimization framework, termed Pareto set prediction assisted\nevolutionary bilevel multi-objective optimization (PSP-BLEMO). Systematic\nexperiments with existing state-of-the-art methods are presented to demonstrate\nits benefit. 
The experiments show that the proposed approach is competitive\nacross a range of problems, including both deceptive and non-deceptive problems","PeriodicalId":501347,"journal":{"name":"arXiv - CS - Neural and Evolutionary Computing","volume":"33 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Pareto Set Prediction Assisted Bilevel Multi-objective Optimization\",\"authors\":\"Bing Wang, Hemant K. Singh, Tapabrata Ray\",\"doi\":\"arxiv-2409.03328\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Bilevel optimization problems comprise an upper level optimization task that\\ncontains a lower level optimization task as a constraint. While there is a\\nsignificant and growing literature devoted to solving bilevel problems with\\nsingle objective at both levels using evolutionary computation, there is\\nrelatively scarce work done to address problems with multiple objectives\\n(BLMOP) at both levels. For black-box BLMOPs, the existing evolutionary\\ntechniques typically utilize nested search, which in its native form consumes\\nlarge number of function evaluations. In this work, we propose to reduce this\\nexpense by predicting the lower level Pareto set for a candidate upper level\\nsolution directly, instead of conducting an optimization from scratch. Such a\\nprediction is significantly challenging for BLMOPs as it involves one-to-many\\nmapping scenario. We resolve this bottleneck by supplementing the dataset using\\na helper variable and construct a neural network, which can then be trained to\\nmap the variables in a meaningful manner. Then, we embed this initialization\\nwithin a bilevel optimization framework, termed Pareto set prediction assisted\\nevolutionary bilevel multi-objective optimization (PSP-BLEMO). 
Systematic\\nexperiments with existing state-of-the-art methods are presented to demonstrate\\nits benefit. The experiments show that the proposed approach is competitive\\nacross a range of problems, including both deceptive and non-deceptive problems\",\"PeriodicalId\":501347,\"journal\":{\"name\":\"arXiv - CS - Neural and Evolutionary Computing\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Neural and Evolutionary Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.03328\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Neural and Evolutionary Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.03328","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Pareto Set Prediction Assisted Bilevel Multi-objective Optimization
Bilevel optimization problems comprise an upper level optimization task that
contains a lower level optimization task as a constraint. While there is a
significant and growing literature devoted to solving bilevel problems with a
single objective at both levels using evolutionary computation, there is
relatively little work addressing problems with multiple objectives
(BLMOPs) at both levels. For black-box BLMOPs, existing evolutionary
techniques typically utilize nested search, which in its native form consumes
a large number of function evaluations. In this work, we propose to reduce this
expense by predicting the lower level Pareto set for a candidate upper level
solution directly, instead of conducting an optimization from scratch. Such a
prediction is particularly challenging for BLMOPs as it involves a one-to-many
mapping. We resolve this bottleneck by supplementing the dataset with
a helper variable and constructing a neural network, which can then be trained to
map the variables in a meaningful manner. Then, we embed this initialization
within a bilevel optimization framework, termed Pareto set prediction assisted
evolutionary bilevel multi-objective optimization (PSP-BLEMO). Systematic
experiments with existing state-of-the-art methods are presented to demonstrate
its benefits. The experiments show that the proposed approach is competitive
across a range of problems, including both deceptive and non-deceptive ones.
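The helper-variable idea in the abstract can be sketched in a minimal, self-contained way. The toy problem, network size, and helper-variable parameterization below are all illustrative assumptions, not the paper's actual implementation: a scalar helper variable t indexes a position along the lower-level Pareto set, so each training input (x_u, t) has a unique target and the one-to-many mapping becomes learnable by a standard regressor.

```python
# Hypothetical sketch of Pareto set prediction via a helper variable.
# Assumed toy ground truth: for a scalar upper-level variable x_u, the
# lower-level Pareto set is {(t, x_u * t) : t in [0, 1]}.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def pareto_point(x_u, t):
    # One lower-level Pareto-optimal solution, indexed by helper variable t.
    return np.stack([t, x_u * t], axis=-1)

# Supplement the dataset with t: each (x_u, t) pair now maps to a single
# target x_l, removing the one-to-many ambiguity.
n = 2000
x_u = rng.uniform(0.0, 1.0, n)
t = rng.uniform(0.0, 1.0, n)
X = np.column_stack([x_u, t])
Y = pareto_point(x_u, t)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
net.fit(X, Y)

# For a new upper-level candidate, sweep t to predict its entire lower-level
# Pareto set directly, instead of running a lower-level optimization from scratch.
x_new = 0.5
ts = np.linspace(0.0, 1.0, 11)
pred = net.predict(np.column_stack([np.full_like(ts, x_new), ts]))
true = pareto_point(np.full_like(ts, x_new), ts)
print(pred.shape)  # (11, 2): eleven predicted Pareto-optimal lower-level solutions
```

In PSP-BLEMO these predictions serve as an initialization for the lower-level search rather than a final answer, so modest prediction error is acceptable; the saved function evaluations come from starting near the Pareto set instead of from scratch.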