A Particle Swarm Optimization-Based Approach Coupled With Large Language Models for Prompt Optimization

Po-Cheng Hsieh, Wei-Po Lee

Expert Systems, vol. 42, no. 6. Published 2025-04-29. DOI: 10.1111/exsy.70049 (https://onlinelibrary.wiley.com/doi/10.1111/exsy.70049)
Abstract
Large language models (LLMs) have developed rapidly and attracted significant attention in recent years. These models exhibit remarkable abilities across a variety of natural language processing (NLP) tasks, but their performance depends heavily on the quality of the prompts they receive. Prompt engineering methods have been proposed to further extend the models' abilities across different applications. However, crafting input prompts for better accuracy and efficiency demands substantial expertise and trial-and-error effort. Automating the prompting process is therefore important, as it can greatly reduce the human effort required to build suitable prompts. In this work, we develop a new metaheuristic algorithm that couples the Particle Swarm Optimization (PSO) technique with LLMs for prompt optimization. Our approach has several distinctive features: it converges within a small number of iterations (typically 10–20), vastly reducing expensive LLM usage costs; its simplicity and efficiency make it easy to apply to many kinds of tasks; and, most importantly, it depends far less on the quality of the initial prompts, because it improves prompts by learning more effectively from large amounts of existing data. To evaluate the proposed approach, we conducted a series of experiments on several types of NLP datasets and compared the results against other methods. The results highlight the value of coupling metaheuristic search algorithms with LLMs for prompt optimization, demonstrating that the presented approach can be adopted to enhance the performance of LLMs.
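The abstract does not reproduce the authors' algorithm, but the metaheuristic it builds on is standard. As a rough, illustrative sketch only, the following is a minimal canonical PSO loop minimizing a numeric objective; the function name `pso`, all parameters, and the sphere-function example are generic assumptions, not the paper's method. In the paper's setting, the fitness call would instead score a candidate prompt via an LLM, which is why keeping the iteration count small (the 10–20 range the abstract cites) matters for cost.

```python
import random

def pso(fitness, dim, n_particles=10, iters=20, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over [-5, 5]^dim with a basic particle swarm.

    Each particle tracks its personal best; the swarm tracks a global best.
    Velocities blend inertia, attraction to the personal best, and
    attraction to the global best (the textbook PSO update rule).
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy usage: minimize the sphere function; the swarm drifts toward the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=2)
```

For prompt optimization, the continuous position vector would be replaced or decoded into a discrete prompt (e.g., via an LLM-driven mutation step), but the personal-best/global-best bookkeeping above is the part PSO contributes.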
About the Journal
Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems – including expert systems – based thereon. Detailed scientific evaluation is an essential part of any paper.
As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, we are aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.