A Particle Swarm Optimization-Based Approach Coupled With Large Language Models for Prompt Optimization

IF: 3.0 · CAS Tier 4 (Computer Science) · JCR Q2 (Computer Science, Artificial Intelligence)
Expert Systems · Pub Date: 2025-04-29 · DOI: 10.1111/exsy.70049
Po-Cheng Hsieh, Wei-Po Lee
{"title":"A Particle Swarm Optimization-Based Approach Coupled With Large Language Models for Prompt Optimization","authors":"Po-Cheng Hsieh,&nbsp;Wei-Po Lee","doi":"10.1111/exsy.70049","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Large language models (LLMs) have been developing rapidly to attract significant attention these days. These models have exhibited remarkable abilities in achieving various natural language processing (NLP) tasks, but the performance depends highly on the quality of prompting. Prompt engineering methods have been promoted for further extending the models' abilities to perform different applications. However, prompt engineering involves crafting input prompts for better accuracy and efficiency, demanding substantial expertise with trial-and-error effort. Automating the prompting process is important and can largely reduce human efforts in building suitable prompts. In this work, we develop a new metaheuristic algorithm to couple the Particle Swarm Optimization (PSO) technique and LLMs for prompt optimization. Our approach has some unique features: it can converge within only a small number of iterations (i.e., typically 10–20 iterations) to vastly reduce the expensive LLM usage cost; it can easily be applied to conduct many kinds of tasks owing to its simplicity and efficiency; and most importantly, it does not need to depend so much on the quality of initial prompts, because it can improve the prompts through learning more effectively based on enormous existing data. To evaluate the proposed approach, we conducted a series of experiments with several types of NLP datasets and compared them to others. The results highlight the importance of coupling metaheuristic search algorithms and LLMs for prompt optimization, proving that the presented approach can be adopted to enhance the performance of LLMs.</p>\n </div>","PeriodicalId":51053,"journal":{"name":"Expert Systems","volume":"42 6","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/exsy.70049","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Large language models (LLMs) have developed rapidly and attracted significant attention in recent years. These models exhibit remarkable abilities across a wide range of natural language processing (NLP) tasks, but their performance depends heavily on the quality of the prompts they are given. Prompt engineering methods have been proposed to further extend the models' abilities across different applications. However, prompt engineering involves crafting input prompts for better accuracy and efficiency, which demands substantial expertise and trial-and-error effort. Automating the prompting process is therefore important, as it can greatly reduce the human effort needed to build suitable prompts. In this work, we develop a new metaheuristic algorithm that couples the Particle Swarm Optimization (PSO) technique with LLMs for prompt optimization. Our approach has several distinctive features: it converges within a small number of iterations (typically 10–20), greatly reducing expensive LLM usage costs; it can easily be applied to many kinds of tasks owing to its simplicity and efficiency; and, most importantly, it does not depend heavily on the quality of the initial prompts, because it improves prompts effectively by learning from large amounts of existing data. To evaluate the proposed approach, we conducted a series of experiments on several types of NLP datasets and compared it against other methods. The results highlight the value of coupling metaheuristic search algorithms with LLMs for prompt optimization and show that the presented approach can be adopted to enhance the performance of LLMs.
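The abstract describes the algorithm only at a high level, so the following is a minimal, hypothetical sketch of how a PSO-style loop over prompts could look: each particle is a candidate prompt, fitness is task performance, and the numeric velocity update of classic PSO is replaced by an LLM rewrite that pulls a prompt toward its own best version and the swarm's global best. The names llm_rewrite and score_prompt are placeholder stand-ins, not the authors' implementation.

```python
# Illustrative sketch of PSO-style prompt optimization (NOT the paper's code).
# llm_rewrite and score_prompt are hypothetical stubs standing in for an
# LLM call and a validation-set evaluation, respectively.
import random

def llm_rewrite(prompt: str, personal_best: str, global_best: str) -> str:
    """Stand-in for the PSO 'velocity' step done in text space: a real
    system would ask an LLM to blend the current prompt with its personal
    best and the swarm's global best into one improved prompt."""
    return random.choice([prompt, personal_best, global_best])

def score_prompt(prompt: str) -> float:
    """Stand-in fitness: task accuracy when the LLM is driven by `prompt`.
    Here, a dummy proxy (distance of prompt length from 40 characters)."""
    return -abs(len(prompt) - 40)

def pso_prompt_search(seed_prompts, iterations=15):
    """PSO-style search over candidate prompts. The bookkeeping (particles,
    personal bests, global best) mirrors standard PSO; only the position
    update is replaced by a text rewrite."""
    particles = list(seed_prompts)
    p_best = particles[:]                       # personal best per particle
    p_best_score = [score_prompt(p) for p in particles]
    g_idx = max(range(len(particles)), key=lambda i: p_best_score[i])
    g_best, g_best_score = p_best[g_idx], p_best_score[g_idx]

    for _ in range(iterations):                 # abstract reports 10-20 suffice
        for i, prompt in enumerate(particles):
            candidate = llm_rewrite(prompt, p_best[i], g_best)
            s = score_prompt(candidate)
            particles[i] = candidate            # particle always moves
            if s > p_best_score[i]:             # update personal best
                p_best[i], p_best_score[i] = candidate, s
            if s > g_best_score:                # update global best
                g_best, g_best_score = candidate, s
    return g_best

if __name__ == "__main__":
    seeds = ["Classify the sentiment of this review:",
             "Decide whether the following text is positive or negative.",
             "Label the review below as positive or negative."]
    print(pso_prompt_search(seeds))
```

In a real system, llm_rewrite would prompt an LLM to merge the three candidate prompts and score_prompt would run the target task on held-out data; the loop's bookkeeping is exactly that of standard PSO, which is what keeps the iteration count, and thus the LLM usage cost, low.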

Source journal: Expert Systems (Engineering & Technology / Computer Science: Theory & Methods)
CiteScore: 7.40
Self-citation rate: 6.10%
Articles per year: 266
Review time: 24 months
Journal description: Expert Systems: The Journal of Knowledge Engineering publishes papers dealing with all aspects of knowledge engineering, including individual methods and techniques in knowledge acquisition and representation, and their application in the construction of systems (including expert systems) based thereon. Detailed scientific evaluation is an essential part of any paper. As well as traditional application areas, such as Software and Requirements Engineering, Human-Computer Interaction, and Artificial Intelligence, the journal is aiming at the new and growing markets for these technologies, such as Business, Economy, Market Research, and Medical and Health Care. The shift towards this new focus will be marked by a series of special issues covering hot and emergent topics.