Yuanzhao Zhai, Tingkai Yang, Kele Xu, Feng Dawei, Cheng Yang, Bo Ding, Huaimin Wang
{"title":"通过步骤级 Q 值模型增强 LLM 代理的决策能力","authors":"Yuanzhao Zhai, Tingkai Yang, Kele Xu, Feng Dawei, Cheng Yang, Bo Ding, Huaimin Wang","doi":"arxiv-2409.09345","DOIUrl":null,"url":null,"abstract":"Agents significantly enhance the capabilities of standalone Large Language\nModels (LLMs) by perceiving environments, making decisions, and executing\nactions. However, LLM agents still face challenges in tasks that require\nmultiple decision-making steps. Estimating the value of actions in specific\ntasks is difficult when intermediate actions are neither appropriately rewarded\nnor penalized. In this paper, we propose leveraging a task-relevant Q-value\nmodel to guide action selection. Specifically, we first collect decision-making\ntrajectories annotated with step-level Q values via Monte Carlo Tree Search\n(MCTS) and construct preference data. We then use another LLM to fit these\npreferences through step-level Direct Policy Optimization (DPO), which serves\nas the Q-value model. During inference, at each decision-making step, LLM\nagents select the action with the highest Q value before interacting with the\nenvironment. We apply our method to various open-source and API-based LLM\nagents, demonstrating that Q-value models significantly improve their\nperformance. Notably, the performance of the agent built with\nPhi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when\nenhanced with Q-value models, even surpassing GPT-4o-mini. Additionally,\nQ-value models offer several advantages, such as generalization to different\nLLM agents and seamless integration with existing prompting strategies.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models\",\"authors\":\"Yuanzhao Zhai, Tingkai Yang, Kele Xu, Feng Dawei, Cheng Yang, Bo Ding, Huaimin Wang\",\"doi\":\"arxiv-2409.09345\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Agents significantly enhance the capabilities of standalone Large Language\\nModels (LLMs) by perceiving environments, making decisions, and executing\\nactions. However, LLM agents still face challenges in tasks that require\\nmultiple decision-making steps. Estimating the value of actions in specific\\ntasks is difficult when intermediate actions are neither appropriately rewarded\\nnor penalized. In this paper, we propose leveraging a task-relevant Q-value\\nmodel to guide action selection. Specifically, we first collect decision-making\\ntrajectories annotated with step-level Q values via Monte Carlo Tree Search\\n(MCTS) and construct preference data. We then use another LLM to fit these\\npreferences through step-level Direct Policy Optimization (DPO), which serves\\nas the Q-value model. During inference, at each decision-making step, LLM\\nagents select the action with the highest Q value before interacting with the\\nenvironment. We apply our method to various open-source and API-based LLM\\nagents, demonstrating that Q-value models significantly improve their\\nperformance. Notably, the performance of the agent built with\\nPhi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when\\nenhanced with Q-value models, even surpassing GPT-4o-mini. 
Additionally,\\nQ-value models offer several advantages, such as generalization to different\\nLLM agents and seamless integration with existing prompting strategies.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.09345\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09345","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models
Agents significantly enhance the capabilities of standalone Large Language
Models (LLMs) by perceiving environments, making decisions, and executing
actions. However, LLM agents still face challenges in tasks that require
multiple decision-making steps. Estimating the value of actions in specific
tasks is difficult when intermediate actions are neither appropriately rewarded
nor penalized. In this paper, we propose leveraging a task-relevant Q-value
model to guide action selection. Specifically, we first collect decision-making
trajectories annotated with step-level Q values via Monte Carlo Tree Search
(MCTS) and construct preference data. We then use another LLM to fit these
preferences through step-level Direct Preference Optimization (DPO), which serves
as the Q-value model. During inference, at each decision-making step, LLM
agents select the action with the highest Q value before interacting with the
environment. We apply our method to various open-source and API-based LLM
agents, demonstrating that Q-value models significantly improve their
performance. Notably, the performance of the agent built with
Phi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when
enhanced with Q-value models, even surpassing GPT-4o-mini. Additionally,
Q-value models offer several advantages, such as generalization to different
LLM agents and seamless integration with existing prompting strategies.
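
To make the training pipeline more concrete, below is a minimal sketch of how step-level preference data could be assembled from MCTS-annotated trajectories. The data layout, the field names, and the pairwise labeling rule (at the same state, the action with the higher Q value is preferred) are assumptions for illustration; the paper's exact construction is not specified in the abstract.

    # Minimal sketch: turning MCTS step-level Q values into DPO-style preference pairs.
    # The Step fields, the grouping by state, and the tie-skipping margin are assumed,
    # not taken from the paper.
    from dataclasses import dataclass
    from itertools import combinations

    @dataclass
    class Step:
        state: str      # observation/prompt context at this decision step
        action: str     # candidate action expanded by MCTS at this step
        q_value: float  # step-level Q value estimated from MCTS rollouts

    def build_preference_pairs(steps_by_state, margin=0.0):
        """Pair sibling actions at the same state; the higher-Q action becomes
        'chosen' and the lower-Q action 'rejected'."""
        pairs = []
        for state, steps in steps_by_state.items():
            for a, b in combinations(steps, 2):
                if abs(a.q_value - b.q_value) <= margin:
                    continue  # ties carry no preference signal
                chosen, rejected = (a, b) if a.q_value > b.q_value else (b, a)
                pairs.append({"prompt": state,
                              "chosen": chosen.action,
                              "rejected": rejected.action})
        return pairs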
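The fitting step uses the standard DPO objective (Rafailov et al., 2023), applied here at the level of individual decision steps rather than whole responses. Writing s for the state, a_w for the preferred (higher-Q) action, and a_l for the dispreferred one, the objective takes the form:

    \mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(s,\,a_w,\,a_l)}\Big[\log \sigma\Big(\beta \log \tfrac{\pi_\theta(a_w \mid s)}{\pi_{\mathrm{ref}}(a_w \mid s)} - \beta \log \tfrac{\pi_\theta(a_l \mid s)}{\pi_{\mathrm{ref}}(a_l \mid s)}\Big)\Big]

A common way to read such a model as a scorer, and one plausible reading of "Q-value model" here, is to treat the implicit reward \beta \log \tfrac{\pi_\theta(a \mid s)}{\pi_{\mathrm{ref}}(a \mid s)} as the step-level Q estimate for action a; the abstract does not spell out the exact parameterization.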
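At inference time, the abstract describes a best-of-N-style selection: the agent proposes several candidate actions at each step, the Q-value model scores them, and only the top-scoring action is sent to the environment. A minimal sketch follows; the q_value_model callable, the propose_actions helper, and the environment interface are hypothetical placeholders, not the paper's API.

    # Minimal sketch of Q-value-guided action selection at each decision step.
    # `q_value_model`, `propose_actions`, and `env.step` are assumed interfaces.
    from typing import Callable, Sequence

    def select_action(state: str,
                      candidates: Sequence[str],
                      q_value_model: Callable[[str, str], float]) -> str:
        """Score each candidate action with the Q-value model and return the
        highest-scoring one; only that action is executed in the environment."""
        scores = {a: q_value_model(state, a) for a in candidates}
        return max(scores, key=scores.get)

    # Hypothetical agent loop:
    # candidates = agent.propose_actions(state, n=5)   # sample several candidate actions
    # action = select_action(state, candidates, q_value_model)
    # state, reward, done = env.step(action)

Because the selection only re-ranks candidate actions, it composes with whatever prompting strategy the agent already uses to generate them, which is consistent with the abstract's claim of seamless integration with existing prompting strategies.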