Advancing Automated Knowledge Transfer in Evolutionary Multitasking via Large Language Models

Yuxiao Huang, Xuebin Lv, Shenghao Wu, Jibin Wu, Liang Feng, Kay Chen Tan

arXiv:2409.04270 · arXiv - CS - Neural and Evolutionary Computing · 2024-09-06
Evolutionary Multi-task Optimization (EMTO) is a paradigm that leverages knowledge transfer across simultaneously optimized tasks to enhance search performance. To improve EMTO's performance, various knowledge transfer models have been developed for specific optimization tasks, but designing these models often requires substantial expert knowledge. Recently, large language models (LLMs) have achieved remarkable success in autonomous programming, producing effective solvers for specific problems. In this work, an LLM-based optimization paradigm is introduced that establishes an autonomous model factory for generating knowledge transfer models, enabling effective and efficient knowledge transfer across various optimization tasks. To evaluate the proposed method, we conducted comprehensive empirical studies comparing the LLM-generated knowledge transfer model with existing state-of-the-art knowledge transfer methods. The results demonstrate that the generated model achieves superior or competitive performance against hand-crafted knowledge transfer models in terms of both efficiency and effectiveness.
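
The abstract describes the paradigm only at a high level, so the following is a minimal sketch of the kind of loop it implies: two tasks evolved in parallel, with a pluggable transfer model injecting adapted elites from one population into the other. Everything here is an assumption for illustration; the task functions, the `transfer_knowledge` heuristic (a simple mean/std alignment), and all parameter choices stand in for the models the LLM would generate and are not taken from the paper.

```python
import numpy as np

# Minimal EMTO sketch. All names (sphere, shifted_sphere,
# transfer_knowledge) are illustrative assumptions, not the paper's API.

def sphere(x):
    """Task 1: minimize the sum of squares (optimum at the origin)."""
    return float(np.sum(x ** 2))

def shifted_sphere(x):
    """Task 2: the same landscape shifted by 0.5 in every dimension."""
    return float(np.sum((x - 0.5) ** 2))

def transfer_knowledge(source_pop, source_fitness, target_pop, k=5):
    """Stand-in for an LLM-generated transfer model: take the k best
    source solutions and align them with the target population's
    mean/std before injection (a classic hand-crafted heuristic)."""
    elites = source_pop[np.argsort(source_fitness)[:k]]
    s_mu, s_sd = source_pop.mean(axis=0), source_pop.std(axis=0) + 1e-12
    t_mu, t_sd = target_pop.mean(axis=0), target_pop.std(axis=0) + 1e-12
    return (elites - s_mu) / s_sd * t_sd + t_mu

rng = np.random.default_rng(seed=0)
dim, pop_size = 10, 30
pop1 = rng.uniform(-1.0, 1.0, (pop_size, dim))
pop2 = rng.uniform(-1.0, 1.0, (pop_size, dim))

for gen in range(50):
    # Independent (mu + lambda)-style evolution on each task.
    for pop, f in ((pop1, sphere), (pop2, shifted_sphere)):
        children = pop + rng.normal(0.0, 0.1, pop.shape)
        merged = np.vstack([pop, children])
        fitness = np.array([f(x) for x in merged])
        pop[:] = merged[np.argsort(fitness)[:pop_size]]
    # Every 10 generations, transfer knowledge from task 1 to task 2.
    if gen % 10 == 0:
        fit1 = np.array([sphere(x) for x in pop1])
        migrants = transfer_knowledge(pop1, fit1, pop2)
        pop2[-len(migrants):] = migrants

print("best task-2 fitness:", min(shifted_sphere(x) for x in pop2))
```

In the paper's framing, the body of `transfer_knowledge` is the component the LLM-based model factory would be asked to write and refine, while the surrounding evolutionary loop stays fixed.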