{"title":"AIPO: Automatic Instruction Prompt Optimization by model itself with “Gradient Ascent”","authors":"Kyeonghye Park, Daeshik Kim","doi":"10.1016/j.csl.2025.101889","DOIUrl":null,"url":null,"abstract":"<div><div>Large language models (LLMs) can perform a variety of tasks, such as summarization, translation, and question answering, by generating answers from a user's input prompt. The text given to the model as input, including the instruction, is called the input prompt. There are two types of input prompt: zero-shot prompting poses a question with no examples, whereas few-shot prompting poses a question with several examples. How the input prompt is written can have a significant impact on the accuracy of the model's generation; the research area concerned with this is called prompt engineering. Prompt engineering, and prompt optimization in particular, seeks the prompts best suited to each model and task. A manually written prompt could be optimal, but crafting prompts by hand is time-consuming and expensive. Research is therefore being conducted on automatically generating prompts that are as effective as human-crafted ones for each task. We propose <em>Automatic Instruction Prompt Optimization</em> (AIPO), which lets the model generate an initial prompt directly through instruction induction when given a task in a zero-shot setting, and then refine that initial prompt into an optimal prompt for the model using a “gradient ascent” algorithm. With the final prompt generated by AIPO, we achieve more accurate generation than with manual prompts on benchmark datasets, regardless of the output format.</div></div>","PeriodicalId":50638,"journal":{"name":"Computer Speech and Language","volume":"96 ","pages":"Article 101889"},"PeriodicalIF":3.4000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Speech and Language","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0885230825001147","RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Large language models (LLMs) can perform a variety of tasks, such as summarization, translation, and question answering, by generating answers from a user's input prompt. The text given to the model as input, including the instruction, is called the input prompt. There are two types of input prompt: zero-shot prompting poses a question with no examples, whereas few-shot prompting poses a question with several examples. How the input prompt is written can have a significant impact on the accuracy of the model's generation; the research area concerned with this is called prompt engineering. Prompt engineering, and prompt optimization in particular, seeks the prompts best suited to each model and task. A manually written prompt could be optimal, but crafting prompts by hand is time-consuming and expensive. Research is therefore being conducted on automatically generating prompts that are as effective as human-crafted ones for each task. We propose Automatic Instruction Prompt Optimization (AIPO), which lets the model generate an initial prompt directly through instruction induction when given a task in a zero-shot setting, and then refine that initial prompt into an optimal prompt for the model using a “gradient ascent” algorithm. With the final prompt generated by AIPO, we achieve more accurate generation than with manual prompts on benchmark datasets, regardless of the output format.
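The abstract's "gradient ascent" over prompts can be read as iterative hill-climbing in text space: propose candidate rewrites of the current prompt, score each one, and keep any candidate that improves the score. The sketch below is a toy illustration of that loop, not the authors' implementation: `score` is a keyword-overlap proxy standing in for validation accuracy, and `propose_edits` is a stand-in for asking the LLM to critique and rewrite the prompt. All function names and the scoring scheme are assumptions for illustration.

```python
def score(prompt, targets):
    """Toy proxy for task accuracy: fraction of target terms present.

    In an AIPO-style system this would instead be the model's accuracy
    on a held-out validation set when driven by `prompt`.
    """
    return sum(1 for t in targets if t in prompt) / len(targets)

def propose_edits(prompt, vocabulary):
    """Toy stand-in for model-generated prompt rewrites: one candidate
    per vocabulary term, appended to the current prompt."""
    return [prompt + " " + term for term in vocabulary]

def ascend(initial_prompt, targets, vocabulary, max_steps=10):
    """Hill-climb ("gradient ascent" in text space): at each step,
    adopt the best-scoring candidate rewrite; stop at a local optimum."""
    best = initial_prompt
    best_score = score(best, targets)
    for _ in range(max_steps):
        improved = False
        for cand in propose_edits(best, vocabulary):
            s = score(cand, targets)
            if s > best_score:
                best, best_score = cand, s
                improved = True
        if not improved:
            break  # no candidate improves the score: local optimum
    return best, best_score
```

For example, starting from the bare prompt `"Please"` with targets `["summarize", "concise"]`, the loop greedily appends the terms that raise the score until it saturates. The key design point mirrored from the abstract is that the search is driven only by a scalar score, so any black-box evaluator (here a keyword count, in AIPO the model's own accuracy) can play the role of the objective.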
Journal introduction:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of, and experimentation with, complex models of speech and language processing have become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.