Robust Offline Actor-Critic with On-Policy Regularized Policy Evaluation

IF 15.3 · CAS Q1 (Computer Science) · JCR Q1 · AUTOMATION & CONTROL SYSTEMS
Shuo Cao;Xuesong Wang;Yuhu Cheng
{"title":"稳健的离线行为批判与政策正则化政策评估","authors":"Shuo Cao;Xuesong Wang;Yuhu Cheng","doi":"10.1109/JAS.2024.124494","DOIUrl":null,"url":null,"abstract":"To alleviate the extrapolation error and instability inherent in Q-function directly learned by off-policy Q-learning (QL-style) on static datasets, this article utilizes the on-policy state-action-reward-state-action (SARSA-style) to develop an offline reinforcement learning (RL) method termed robust offline Actor-Critic with on-policy regularized policy evaluation (OPRAC). With the help of SARSA-style bootstrap actions, a conservative on-policy Q-function and a penalty term for matching the on-policy and off-policy actions are jointly constructed to regularize the optimal Q-function of off-policy QL-style. This naturally equips the off-policy QL-style policy evaluation with the intrinsic pessimistic conservatism of on-policy SARSA-style, thus facilitating the acquisition of stable estimated Q-function. Even with limited data sampling errors, the convergence of Q-function learned by OPRAC and the controllability of bias upper bound between the learned Q-function and its true Q-value can be theoretically guaranteed. In addition, the sub-optimality of learned optimal policy merely stems from sampling errors. Experiments on the well-known D4RL Gym-MuJoCo benchmark demonstrate that OPRAC can rapidly learn robust and effective task-solving policies owing to the stable estimate of Q-value, outperforming state-of-the-art offline RLs by at least 15%.","PeriodicalId":54230,"journal":{"name":"Ieee-Caa Journal of Automatica Sinica","volume":"11 12","pages":"2497-2511"},"PeriodicalIF":15.3000,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Robust Offline Actor-Critic with On-Policy Regularized Policy Evaluation\",\"authors\":\"Shuo Cao;Xuesong Wang;Yuhu Cheng\",\"doi\":\"10.1109/JAS.2024.124494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"To alleviate the extrapolation error and instability inherent in Q-function directly learned by off-policy Q-learning (QL-style) on static datasets, this article utilizes the on-policy state-action-reward-state-action (SARSA-style) to develop an offline reinforcement learning (RL) method termed robust offline Actor-Critic with on-policy regularized policy evaluation (OPRAC). With the help of SARSA-style bootstrap actions, a conservative on-policy Q-function and a penalty term for matching the on-policy and off-policy actions are jointly constructed to regularize the optimal Q-function of off-policy QL-style. This naturally equips the off-policy QL-style policy evaluation with the intrinsic pessimistic conservatism of on-policy SARSA-style, thus facilitating the acquisition of stable estimated Q-function. Even with limited data sampling errors, the convergence of Q-function learned by OPRAC and the controllability of bias upper bound between the learned Q-function and its true Q-value can be theoretically guaranteed. In addition, the sub-optimality of learned optimal policy merely stems from sampling errors. 
Experiments on the well-known D4RL Gym-MuJoCo benchmark demonstrate that OPRAC can rapidly learn robust and effective task-solving policies owing to the stable estimate of Q-value, outperforming state-of-the-art offline RLs by at least 15%.\",\"PeriodicalId\":54230,\"journal\":{\"name\":\"Ieee-Caa Journal of Automatica Sinica\",\"volume\":\"11 12\",\"pages\":\"2497-2511\"},\"PeriodicalIF\":15.3000,\"publicationDate\":\"2024-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Ieee-Caa Journal of Automatica Sinica\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10759596/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ieee-Caa Journal of Automatica Sinica","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10759596/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
引用次数: 0

Abstract

To alleviate the extrapolation error and instability inherent in the Q-function learned directly by off-policy Q-learning (QL-style) on static datasets, this article uses the on-policy state-action-reward-state-action (SARSA-style) update to develop an offline reinforcement learning (RL) method termed robust offline Actor-Critic with on-policy regularized policy evaluation (OPRAC). With the help of SARSA-style bootstrap actions, a conservative on-policy Q-function and a penalty term matching the on-policy and off-policy actions are jointly constructed to regularize the off-policy QL-style optimal Q-function. This naturally equips off-policy QL-style policy evaluation with the intrinsic pessimistic conservatism of on-policy SARSA, facilitating a stable estimated Q-function. Even with limited data sampling errors, the convergence of the Q-function learned by OPRAC and the controllability of the bias upper bound between the learned Q-function and its true Q-value are theoretically guaranteed. In addition, the sub-optimality of the learned optimal policy stems solely from sampling errors. Experiments on the well-known D4RL Gym-MuJoCo benchmark demonstrate that OPRAC rapidly learns robust and effective task-solving policies owing to its stable Q-value estimates, outperforming state-of-the-art offline RL methods by at least 15%.
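
The abstract does not give the concrete form of the regularized Bellman target, but the idea it describes, blending an off-policy (QL-style) bootstrap with an on-policy (SARSA-style) bootstrap and penalizing the mismatch between the policy action and the dataset action, can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the mixing weight `lam`, the MSE form of the penalty, and all function and tensor names (`regularized_td_targets`, `q_net`, `target_q_net`, `policy`, the batch keys) are assumptions made only for illustration.

```python
# Illustrative sketch (not the OPRAC paper's code): a TD target that mixes an
# off-policy QL-style bootstrap with an on-policy SARSA-style bootstrap and
# subtracts a penalty for the gap between the policy action and dataset action.
import torch
import torch.nn.functional as F

def regularized_td_targets(batch, target_q_net, policy, gamma=0.99, lam=0.5):
    """Compute regularized TD targets for a batch of (s, a, r, s', a', done).

    batch: dict of tensors 's', 'a', 'r', 's_next', 'a_next', 'done', where
    'a_next' is the action actually stored in the dataset (SARSA-style bootstrap)
    and 'r'/'done' have shape [B, 1].
    """
    with torch.no_grad():
        # Off-policy (QL-style) bootstrap: evaluate the current policy's action at s'.
        pi_a_next = policy(batch["s_next"])
        q_pi = target_q_net(batch["s_next"], pi_a_next)

        # On-policy (SARSA-style) bootstrap: evaluate the dataset action at s'.
        q_sarsa = target_q_net(batch["s_next"], batch["a_next"])

        # Penalty for the mismatch between policy and dataset actions; it pulls
        # the target toward pessimistic, in-distribution values.
        action_gap = F.mse_loss(
            pi_a_next, batch["a_next"], reduction="none"
        ).mean(dim=-1, keepdim=True)

        # Mix the two bootstraps and subtract the mismatch penalty
        # (the scalar weight lam is a hypothetical hyperparameter).
        q_boot = (1.0 - lam) * q_pi + lam * q_sarsa - lam * action_gap

        targets = batch["r"] + gamma * (1.0 - batch["done"]) * q_boot
    return targets

def critic_loss(q_net, batch, targets):
    """Standard TD regression of the critic toward the regularized targets."""
    q_pred = q_net(batch["s"], batch["a"])
    return F.mse_loss(q_pred, targets)
```

With `lam = 0` this reduces to ordinary off-policy actor-critic policy evaluation; increasing `lam` shifts the bootstrap toward the dataset action and penalizes out-of-distribution policy actions, which corresponds to the pessimistic conservatism the abstract attributes to the SARSA-style regularizer.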
Source journal
IEEE/CAA Journal of Automatica Sinica (Engineering - Control and Systems Engineering)
CiteScore: 23.50
Self-citation rate: 11.00%
Articles per year: 880
Journal description: The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control. Additionally, the journal is abstracted/indexed in several prominent databases including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.