{"title":"Robust Offline Actor-Critic with On-Policy Regularized Policy Evaluation","authors":"Shuo Cao;Xuesong Wang;Yuhu Cheng","doi":"10.1109/JAS.2024.124494","DOIUrl":null,"url":null,"abstract":"To alleviate the extrapolation error and instability inherent in Q-function directly learned by off-policy Q-learning (QL-style) on static datasets, this article utilizes the on-policy state-action-reward-state-action (SARSA-style) to develop an offline reinforcement learning (RL) method termed robust offline Actor-Critic with on-policy regularized policy evaluation (OPRAC). With the help of SARSA-style bootstrap actions, a conservative on-policy Q-function and a penalty term for matching the on-policy and off-policy actions are jointly constructed to regularize the optimal Q-function of off-policy QL-style. This naturally equips the off-policy QL-style policy evaluation with the intrinsic pessimistic conservatism of on-policy SARSA-style, thus facilitating the acquisition of stable estimated Q-function. Even with limited data sampling errors, the convergence of Q-function learned by OPRAC and the controllability of bias upper bound between the learned Q-function and its true Q-value can be theoretically guaranteed. In addition, the sub-optimality of learned optimal policy merely stems from sampling errors. Experiments on the well-known D4RL Gym-MuJoCo benchmark demonstrate that OPRAC can rapidly learn robust and effective task-solving policies owing to the stable estimate of Q-value, outperforming state-of-the-art offline RLs by at least 15%.","PeriodicalId":54230,"journal":{"name":"Ieee-Caa Journal of Automatica Sinica","volume":"11 12","pages":"2497-2511"},"PeriodicalIF":15.3000,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Ieee-Caa Journal of Automatica Sinica","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10759596/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
To alleviate the extrapolation error and instability inherent in the Q-function directly learned by off-policy Q-learning (QL-style) on static datasets, this article utilizes the on-policy state-action-reward-state-action (SARSA-style) algorithm to develop an offline reinforcement learning (RL) method termed robust offline Actor-Critic with on-policy regularized policy evaluation (OPRAC). With the help of SARSA-style bootstrap actions, a conservative on-policy Q-function and a penalty term for matching the on-policy and off-policy actions are jointly constructed to regularize the optimal Q-function of the off-policy QL-style update. This naturally equips the off-policy QL-style policy evaluation with the intrinsic pessimistic conservatism of the on-policy SARSA-style, thus facilitating a stable estimate of the Q-function. Even under the sampling errors caused by limited data, the convergence of the Q-function learned by OPRAC and a controllable upper bound on the bias between the learned Q-function and the true Q-value can be theoretically guaranteed. In addition, the sub-optimality of the learned optimal policy stems solely from sampling errors. Experiments on the well-known D4RL Gym-MuJoCo benchmark demonstrate that OPRAC can rapidly learn robust and effective task-solving policies owing to the stable estimate of the Q-value, outperforming state-of-the-art offline RL methods by at least 15%.
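The following is a minimal sketch, in Python/PyTorch, of how a policy-evaluation loss of the kind described in the abstract might look: an off-policy QL-style temporal-difference target regularized by an on-policy SARSA-style target plus an action-matching penalty. It is reconstructed from the abstract alone; the network interfaces, batch keys, and weighting coefficients (lambda_sarsa, lambda_match) are hypothetical and are not taken from the paper.

import torch
import torch.nn.functional as F

def regularized_critic_loss(q_net, q_target_net, policy, batch,
                            gamma=0.99, lambda_sarsa=1.0, lambda_match=1.0):
    """Sketch of an OPRAC-style critic loss: QL-style TD error regularized by a
    SARSA-style TD error and a penalty matching policy actions to dataset actions."""
    s, a, r = batch["obs"], batch["act"], batch["rew"]
    s_next, a_next, done = batch["next_obs"], batch["next_act"], batch["done"]

    pi_next = policy(s_next)  # off-policy bootstrap action from the learned policy
    with torch.no_grad():
        # QL-style target: bootstrap with the learned policy's next action.
        target_ql = r + gamma * (1.0 - done) * q_target_net(s_next, pi_next.detach())
        # SARSA-style target: bootstrap with the next action stored in the dataset.
        target_sarsa = r + gamma * (1.0 - done) * q_target_net(s_next, a_next)

    q = q_net(s, a)
    loss_ql = F.mse_loss(q, target_ql)        # standard off-policy evaluation term
    loss_sarsa = F.mse_loss(q, target_sarsa)  # conservative on-policy regularizer
    loss_match = F.mse_loss(pi_next, a_next)  # on-/off-policy action-matching penalty
    # How these terms are weighted and combined is an assumption of this sketch.
    return loss_ql + lambda_sarsa * loss_sarsa + lambda_match * loss_match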
Journal Introduction
The IEEE/CAA Journal of Automatica Sinica is a reputable journal that publishes high-quality papers in English on original theoretical/experimental research and development in the field of automation. The journal covers a wide range of topics including automatic control, artificial intelligence and intelligent control, systems theory and engineering, pattern recognition and intelligent systems, automation engineering and applications, information processing and information systems, network-based automation, robotics, sensing and measurement, and navigation, guidance, and control.
Additionally, the journal is abstracted/indexed in several prominent databases including SCIE (Science Citation Index Expanded), EI (Engineering Index), Inspec, Scopus, SCImago, DBLP, CNKI (China National Knowledge Infrastructure), CSCD (Chinese Science Citation Database), and IEEE Xplore.