Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models

Yuanzhao Zhai, Tingkai Yang, Kele Xu, Dawei Feng, Cheng Yang, Bo Ding, Huaimin Wang
{"title":"Enhancing Decision-Making for LLM Agents via Step-Level Q-Value Models","authors":"Yuanzhao Zhai, Tingkai Yang, Kele Xu, Feng Dawei, Cheng Yang, Bo Ding, Huaimin Wang","doi":"arxiv-2409.09345","DOIUrl":null,"url":null,"abstract":"Agents significantly enhance the capabilities of standalone Large Language\nModels (LLMs) by perceiving environments, making decisions, and executing\nactions. However, LLM agents still face challenges in tasks that require\nmultiple decision-making steps. Estimating the value of actions in specific\ntasks is difficult when intermediate actions are neither appropriately rewarded\nnor penalized. In this paper, we propose leveraging a task-relevant Q-value\nmodel to guide action selection. Specifically, we first collect decision-making\ntrajectories annotated with step-level Q values via Monte Carlo Tree Search\n(MCTS) and construct preference data. We then use another LLM to fit these\npreferences through step-level Direct Policy Optimization (DPO), which serves\nas the Q-value model. During inference, at each decision-making step, LLM\nagents select the action with the highest Q value before interacting with the\nenvironment. We apply our method to various open-source and API-based LLM\nagents, demonstrating that Q-value models significantly improve their\nperformance. Notably, the performance of the agent built with\nPhi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when\nenhanced with Q-value models, even surpassing GPT-4o-mini. Additionally,\nQ-value models offer several advantages, such as generalization to different\nLLM agents and seamless integration with existing prompting strategies.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"7 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09345","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Agents significantly enhance the capabilities of standalone Large Language Models (LLMs) by perceiving environments, making decisions, and executing actions. However, LLM agents still face challenges in tasks that require multiple decision-making steps. Estimating the value of actions in specific tasks is difficult when intermediate actions are neither appropriately rewarded nor penalized. In this paper, we propose leveraging a task-relevant Q-value model to guide action selection. Specifically, we first collect decision-making trajectories annotated with step-level Q values via Monte Carlo Tree Search (MCTS) and construct preference data. We then use another LLM to fit these preferences through step-level Direct Preference Optimization (DPO), which serves as the Q-value model. During inference, at each decision-making step, LLM agents select the action with the highest Q value before interacting with the environment. We apply our method to various open-source and API-based LLM agents, demonstrating that Q-value models significantly improve their performance. Notably, the performance of the agent built with Phi-3-mini-4k-instruct improved by 103% on WebShop and 75% on HotPotQA when enhanced with Q-value models, even surpassing GPT-4o-mini. Additionally, Q-value models offer several advantages, such as generalization to different LLM agents and seamless integration with existing prompting strategies.
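The pipeline described in the abstract has two stages: offline, MCTS-annotated trajectories are turned into per-step preference pairs used to train a Q-value model with step-level DPO; online, at each decision step the agent samples candidate actions and executes the one the Q-value model scores highest, before touching the environment. The sketch below illustrates both stages in simplified form; all names (build_preference_pairs, select_action, run_episode, and the callable stand-ins for the agent, Q-value model, and environment) are illustrative assumptions, not the authors' released interface.

```python
# Minimal sketch of the two stages described above. All interfaces here are
# hypothetical stand-ins for illustration, not the authors' implementation.

from typing import Callable, Dict, List, Tuple


def build_preference_pairs(
    step_annotations: List[Tuple[str, Dict[str, float]]]
) -> List[Tuple[str, str, str]]:
    """Turn MCTS step-level Q annotations into (state, chosen, rejected) pairs.

    Each annotation is (state, {action: Q value}); for every pair of actions
    at a state, the higher-Q action is preferred. These pairs would then be
    used to train the Q-value model with step-level DPO.
    """
    pairs = []
    for state, q_values in step_annotations:
        ranked = sorted(q_values.items(), key=lambda kv: kv[1], reverse=True)
        for i in range(len(ranked)):
            for j in range(i + 1, len(ranked)):
                if ranked[i][1] > ranked[j][1]:
                    pairs.append((state, ranked[i][0], ranked[j][0]))
    return pairs


def select_action(
    trajectory: str,
    propose_actions: Callable[[str, int], List[str]],  # LLM agent: sample k candidate actions
    q_value: Callable[[str, str], float],              # trained Q-value model: score (trajectory, action)
    num_candidates: int = 5,
) -> str:
    """Pick the candidate action with the highest estimated Q value."""
    candidates = propose_actions(trajectory, num_candidates)
    return max(candidates, key=lambda a: q_value(trajectory, a))


def run_episode(
    initial_observation: str,
    step_env: Callable[[str], Tuple[str, bool]],  # environment: action -> (next observation, done)
    propose_actions: Callable[[str, int], List[str]],
    q_value: Callable[[str, str], float],
    max_steps: int = 15,
) -> str:
    """Roll out one episode, greedily following the Q-value model at every step."""
    trajectory = initial_observation
    for _ in range(max_steps):
        action = select_action(trajectory, propose_actions, q_value)
        observation, done = step_env(action)
        trajectory += f"\nAction: {action}\nObservation: {observation}"
        if done:
            break
    return trajectory


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    dummy_agent = lambda traj, k: [f"search[option {i}]" for i in range(k)]
    dummy_q = lambda traj, act: float(act.endswith("0]"))   # pretend option 0 has the highest Q
    dummy_env = lambda act: (f"results for {act}", True)
    print(build_preference_pairs([("start page", {"buy[item A]": 0.8, "buy[item B]": 0.2})]))
    print(run_episode("You are on the WebShop start page.", dummy_env, dummy_agent, dummy_q))
```

In the paper's setting, propose_actions would correspond to the base LLM agent's own action sampling and q_value to the DPO-trained LLM that scores each candidate, so the highest-Q action is chosen before any interaction with the environment.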